Wednesday, December 3, 2014

Here comes [Problem Solving] ---- Last Post

Here comes the LAST POST of my SLOG. I have learnt a lot this term and I hope we can all ACE THE FINAL EXAM and this will never happen...

Here is my problem solving of Penny piles. The problem says: 

You are sitting in front of two drawers.  The left drawer contains 64 pennies, the right drawer contains nothing.  Can you arrange things so that one of the drawers has 48 pennies, using combinations of the following two operations, l and r?


l: If the left drawer has an even number of pennies, you may transfer half of them to the right drawer.  If the left drawer has an odd number of pennies, operation l is disallowed.
r: If the right drawer has an even number of pennies, you may transfer half of them to the left drawer.  If the right drawer has an odd number of pennies, operation r is disallowed.
Choose another number in the range [0,64].  Starting from the same initial position, can you arrange things so that one of the drawers has that number of pennies? Are there any numbers in that range that are impossible to achieve? What about starting with a different number of pennies in the left drawer? 

1. UNDERSTANDING THE PROBLEM
Since this problem statement is long, I would like to simplify it and gather the key information:
Beginning : (64, 0)
Try to get: (48, 16) or (16, 48) or any number in [0, 64]
By using combination of these two operations:
l: iff L = even  => (L/2, R+L/2)
r: iff R = even => (L+R/2, R/2)

2. DEVISING A PLAN
Since it only involves simple math, I can just write it down, such as (64, 0) --> (32, 32) --> (16, 48) or (64, 0) --> (32, 32) --> (48, 16).
It is very easy to reach (48, 16) or (16, 48).
How about the other numbers in [0, 64]? It is clearer to draw a tree diagram.

3. CARRYING OUT THE PLAN

The combinations which are not shown in the pictures are (21, 43), (53, 11), (13, 51), (45, 19), (29, 35), (61, 3). This shows that we can reach any number in [0, 64].
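The tree in the pictures can also be generated programmatically. Here is a minimal sketch (the function name reachable and the breadth-first-search approach are mine, not from lecture) that explores every state reachable from (64, 0) using operations l and r:

```python
from collections import deque

def reachable(total=64):
    # Breadth-first search over drawer states (left, right),
    # applying operations l and r whenever they are allowed.
    start = (total, 0)
    seen = {start}
    queue = deque([start])
    while queue:
        left, right = queue.popleft()
        nxts = []
        if left % 2 == 0:   # operation l: move half of the left drawer
            nxts.append((left // 2, right + left // 2))
        if right % 2 == 0:  # operation r: move half of the right drawer
            nxts.append((left + right // 2, right // 2))
        for nxt in nxts:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

states = reachable(64)
```

Collecting every pile size that appears in some reachable state gives all of 0 through 64, which matches the conclusion above.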

Since 64 pennies are not that many, we can write out all the combinations. But what if we start with a larger number? Then we need another way to do it. The easiest way to check whether a chosen number in [0, 64] can be reached by the operations is to go backwards.

Going backwards means applying the inverse operations: (L*2, R-L) or (L-R, R*2).
For example, let's see if we can reach (14, 50):
(14, 50) --> ( 28, 36) --> (56, 8) --> (48, 16) --> (32, 32) --> (64, 0)
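This backward walk can be automated. Here is a small sketch (the function name backtrack is mine) that assumes the two piles total 64 and the target state is reachable; at each step exactly one inverse operation keeps both piles non-negative:

```python
def backtrack(left, right):
    # Undo operations l and r until we get back to the start state (64, 0).
    path = [(left, right)]
    while (left, right) != (64, 0):
        if right >= left:
            left, right = 2 * left, right - left   # undo operation l
        else:
            left, right = left - right, 2 * right  # undo operation r
        path.append((left, right))
    return path
```

Running backtrack(14, 50) reproduces exactly the chain written above, ending at (64, 0).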

4. LOOKING BACK
In conclusion, we can reach whichever number we want in [0, 64] starting from the initial combination (64, 0). There are many other ways to solve this question. I have read http://cscblogger165.blogspot.ca/2014/10/week-7-polyas-approach-to-penny-piles.html and
http://mycsc165courselog.blogspot.ca/2014/11/week-10-and-problem-solving-episode.html
These two problem solving episodes are very clear and thoughtful. I like using Python programming to solve this question, which is very smart and practical.

Sunday, November 30, 2014

Week 12 [Computability] and [Countability]

In week 10's lecture we talked about halt. Here is the proof that nobody can write halt(f, n): we assume we could write it and derive a contradiction.
def clever(f):
    while halt(f, f):   # halt is the assumed decider; this loops forever if it returns True
        pass
    return 42
Now consider two cases. Case 1: assume clever(clever) halts; then halt(clever, clever) is true, so we enter an infinite loop, so clever(clever) does not halt. Case 2: assume clever(clever) does not halt; then halt(clever, clever) is false, so we return 42, so clever(clever) halts. We get a contradiction in both cases, so we cannot write halt(f, n). We conclude halt(f, n) is well-defined but non-computable. (1. A function f is well-defined iff we can tell what f(x) is for every x in some domain. 2. A function f is computable iff it is well-defined and we can tell how to compute f(x) for every x in the domain.)

How do we know whether a function is computable or not? We can use reductions. If function f(n) can be implemented by extending another function g(n), then we say f reduces to g. If g is computable then f is computable; and if f is non-computable then g is non-computable.

Back to the halt example: a computable initialized would allow a computable halt, and we know that's impossible. All we need to show here is that halt(f, n) reduces to initialized(f, v); in other words, we implement halt(f, n) using initialized(f, v).
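Here is a conceptual sketch of that reduction, in pseudocode. It assumes initialized(g, v) decides whether the variable v gets initialized before use when g runs; neither halt nor initialized can actually be written, which is the whole point of the reduction:

```
def halt(f, n):
    def g(x):
        f(n)        # runs forever exactly when f(n) does not halt
        v = 0       # v gets initialized only if f(n) halts
        return v
    return initialized(g, 'v')   # assumed decider; does not really exist
```

Since this would make halt computable, and halt is non-computable, initialized must be non-computable too.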



My friend Udara mentioned two good videos in his post about halt http://udaracsc165.blogspot.ca/2014/11/omega-theta-halt.html



In week 12's lecture, we used a mapping f: X --> Y to explain well-defined (each x maps to a unique y), 1-1 (each y is mapped to by at most one x), and onto (every y has some x mapping to it).
We also talked about countability: any set A whose size is smaller than or equal to the size of N is countable. (Z is countable, Q is countable, R is uncountable.) Remember that we can only write countably infinitely many Python functions, but there are uncountably many functions, so there are functions we cannot program in Python.
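To see why Z is countable, we can write down an explicit 1-1 and onto mapping from N to Z. A small sketch (the function name is mine):

```python
def nat_to_int(n):
    # Interleave: 0, 1, 2, 3, 4, ... maps to 0, -1, 1, -2, 2, ...
    # Even n covers the non-negative integers, odd n covers the negatives.
    return n // 2 if n % 2 == 0 else -(n + 1) // 2
```

Listing nat_to_int(0), nat_to_int(1), ... enumerates every integer exactly once, which is exactly what it means for the size of Z to be at most the size of N.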


Sunday, November 16, 2014

Week 10 [Big-Ω Proof]

Let's review the definition of Ω(n^2): a function f(n) is in Ω(n^2) iff there exists c in R+ and there exists B in N such that for all n in N, if n >= B then f(n) >= c*n^2. It is the lower bound. From the definition we can see that for a big-Omega proof, the most important part is again picking c and B.

The first example we talked about in class is to prove n^2+n ∈ Ω(15n^2+3). There are two tips. Tip 1: pick B = 1 and assume n >= 1. Tip 2: pick a c small enough to make the right side a lower bound. There are also some important tricks.
Under-estimation tricks: remove a positive term (3n^2+2n >= 3n^2); multiply in a negative term (5n^2-n >= 5n^2 - n*n = 4n^2, valid for n >= 1).
Over-estimation tricks: remove a negative term (3n^2-2n <= 3n^2); multiply in a positive term (5n^2+n <= 5n^2 + n*n = 6n^2, valid for n >= 1).
For a big-O proof, we over-estimate f(n) and under-estimate g(n); for a big-Ω proof, we under-estimate f(n) and over-estimate g(n).
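For the n^2+n ∈ Ω(15n^2+3) example, concrete picks such as c = 1/18 and B = 1 (my own choices; any small enough c works) can be spot-checked numerically. This is not a proof, just a quick sanity check of the constants:

```python
c, B = 1 / 18, 1  # hypothetical picks: c small enough, B = 1

# spot-check f(n) >= c * g(n) for a range of n past the breakpoint B
for n in range(B, 2000):
    assert n**2 + n >= c * (15 * n**2 + 3)
```

The real proof still needs the algebra: 18*(n^2 + n) >= 15n^2 + 3 follows from 3n^2 >= 3 when n >= 1.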

Then we proved some general statements about big-Oh. We introduce set F(The set of all functions that take a natural number as input and return a non-negative real number.) in this kind of proof. 
Let's talk about an example: prove for all f, g in F, f ∈ O(g) => g ∈ Ω(f). Intuition: if f grows no faster than g, then g grows no slower than f. Assume f ∈ O(g): n >= B => f(n) <= c*g(n). We want g ∈ Ω(f): n >= B' => g(n) >= c'*f(n). We need to pick the right B' and c', so we try to find the relationship between B and B', and between c and c'.
Since f <= c*g, we get g >= (1/c)*f. So we can pick c' = 1/c and B' = B.
Here is the proof: 

There is a very tricky proof we talked about in class, which is a disproof: we prove the negation of the statement. The tricky part is to pick n wisely.

Thursday, November 6, 2014

Week 9 [Big-O Proof]

Let's review the definition of O(n^2): a function f(n) is in O(n^2) iff there exists c in R+ and there exists B in N such that for all n in N, if n >= B then f(n) <= c*n^2. From the definition we can see that for a big-Oh proof, the most important part is picking c and B. So here comes the question: how to pick c and B?

The first example we talked about in class is to prove 3n^2+2n ∈ O(n^2). There are two tips. Tip 1: c should probably be larger than 3 (the constant factor of the highest-order term). Tip 2: see what happens when n = 1: 3n^2+2n = 3+2 = 5 = 5n^2, so picking c = 5 with B = 1 is probably good. What happens when we add a constant? For example, to prove 3n^2+2n+5 ∈ O(n^2): same thing. Tip 1: c should be larger than 3. Tip 2: see what happens when n = 1: 3n^2+2n+5 = 3+2+5 = 10 = 10n^2, so picking c = 10 with B = 1 is probably good.
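These picks can be sanity-checked numerically: with c = 5, B = 1 for the first statement and c = 10, B = 1 for the second, the inequalities hold for every n we try. Again this is just a spot check of the constants, not a proof:

```python
# 3n^2 + 2n <= 5n^2 for n >= 1, since 2n <= 2n^2
assert all(3 * n**2 + 2 * n <= 5 * n**2 for n in range(1, 2000))

# 3n^2 + 2n + 5 <= 10n^2 for n >= 1, since 2n + 5 <= 7n^2
assert all(3 * n**2 + 2 * n + 5 <= 10 * n**2 for n in range(1, 2000))
```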

After that we talked about a more complex example: to prove 7n^6 - 5n^4 + 2n^3 ∈ O(6n^8 - 4n^5 + n^2). The thinking process is:
--> assume n >= 1 (pick B = 1)
--> upper-bound the left side by over-estimating: 7n^6 - 5n^4 + 2n^3 <= 7n^6 + 2n^3 <= 7n^6 + 2n^6 = 9n^6
--> lower-bound the right side by under-estimating: 6n^8 - 4n^5 + n^2 >= 6n^8 - 4n^5 >= 6n^8 - 4n^8 = 2n^8
--> choose a c that connects the two bounds: 9n^6 <= (9/2)*(2n^8), so pick c = 9/2
The proof structure looks like:

Larry also talked about an example which is to prove n^3 ∉ O(3n^2). The process is to negate the definition first and then prove the negation.

The key is to figure out c and B. In general, a larger c or a larger B is safe. These two examples are polynomials. How about non-polynomials? We use limits to prove those. The intuition is that if the ratio f(n)/g(n) approaches infinity as n becomes larger and larger, then f(n) grows faster than g(n).
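The ratio idea can be made concrete with the n^3 versus 3n^2 example. A tiny sketch (my own illustration):

```python
def ratio(n):
    # f(n) / g(n) for f(n) = n^3 and g(n) = 3n^2; simplifies to n / 3
    return n**3 / (3 * n**2)
```

Since ratio(n) = n/3 grows without bound, n^3 grows faster than 3n^2, which matches the earlier disproof that n^3 is in O(3n^2).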

Saturday, November 1, 2014

Week 8 Meet [O and Ω]

First of all, let's review the definition of big-O. The precise definition of O(f) is "the set of functions that are eventually no more than f, to within a constant factor." If g belongs to O(f), it means g grows no faster than f (or equivalently, f is an upper bound for g).

When we compare g and f, we need to remember these three points:
1. count the number of steps;
2. constant factors don't matter;
3. only the highest-order term matters.

Larry told us an example which was very easy to understand: "chicken size" is in O("turkey size") means a chicken grows slower than a turkey in the sense that, after a certain breakpoint, a chicken will always be smaller than a turkey.

Functions that are in O(n^2): n^2, 2n^2+4n+1.99999, n^2/3.4890 + 1999n


Ω is the opposite of O. If g belongs to Ω(f), it means g grows no slower than f (or equivalently, f is a lower bound for g).
Here are the formal definitions of O and Ω.
Functions that are in Ω(n^2): 0.000001n^3, n^3 - 0.000001n

Then, let's analyse two algorithms: insertion sort and maximum slice.

We think about i, which has n-1 iterations; then we think about j, which in the worst case has up to i iterations. Don't forget to add the loop guard for each loop: the loop condition is always checked one extra time, when the loop exits.
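The counting can be checked empirically. Here is a small sketch of insertion sort with a step counter (the counting convention is mine: one step per executed inner-loop body, plus one for the loop guard check that finally fails):

```python
def insertion_sort_steps(lst):
    # Sort lst in place, counting inner-loop steps including the final
    # failing loop-guard check of each while loop.
    steps = 0
    for i in range(1, len(lst)):
        j = i
        while j > 0 and lst[j - 1] > lst[j]:
            lst[j - 1], lst[j] = lst[j], lst[j - 1]  # shift the element left
            j -= 1
            steps += 1   # one step per executed loop body
        steps += 1       # the loop guard that finally fails also counts
    return steps
```

On a reversed list of length n this gives n(n-1)/2 + (n-1) steps, so the worst case is quadratic, as the analysis in lecture predicts.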




After comparing these two pictures, we understand that it is OK to over-estimate the number of steps for an upper bound and to under-estimate the number of steps for a lower bound.