Tuesday, December 2, 2014

Tribute to my teammates (SLOGs included)

My teammates for the assignments, Simon and Sunny, have been great support for me this semester; their quality of work and determined spirit are superb. Here are their slogs and the projects we worked on this semester!

ALSO THANKS TO WHOEVER +1'ed MY POST!

ALSO THANKS TO THE ETHIOPIAN READER WHO VISITED MY BLOG, APPARENTLY IT'S A WORLDWIDE PHENOMENON NOW! YOU ARE SPECIAL!


Simon: http://csc165f1.blogspot.ca

Sunny: http://sunnywangthebrilliantone.blogspot.ca




A1: our first collaboration; we got 100% on this one!


A2: it was much harder than A1, but we still scored about 20% higher than the class average.


A3: our last masterpiece. It isn't graded yet, but I believe it's above 90%. It's great!


Sunday, November 30, 2014

Computability and countability

This may well be the last blog post of the semester; the course has been amazing. Our group's solution for Assignment 3 has been finalized, and I'm 95% sure our answers are all correct, one point for that. On another front, the lectures have been about computability and countability, topics I could barely understand on the first try. They are the most abstract of everything we've covered in CSC165, including big O, logic and proof. The final exam is coming, and I wish everyone good luck.

Computers are basically perfect logical number-adding machines: they operate on logic and instructions, with the power of mathematics behind them. The power and speed of a computer make even the most demanding calculation for a human look easy, so we often assume computers are better than humans. However, this is not wholly correct, as there are some things that are really difficult for a computer to solve, like: what do I feel? Will this program halt?

The halting problem made me laugh, and the proof that no one can write a halt function amazed me. Basically, we assume someone can write a halt function, then use that halt function to build a new function that loops forever exactly when halt says it halts, and halts when halt says it loops, creating a contradiction. This proof is due to Church and Turing, two of the greatest computer scientists.
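The contradiction can be sketched in Python (hypothetical names; no real halt can exist, so the stub below just marks where the "magic" decider would go):

```python
def halt(f, i):
    """Hypothetically returns True iff f(i) eventually halts."""
    raise NotImplementedError("no such function can be written")

def confused(f):
    # do the opposite of whatever halt predicts about f running on itself
    if halt(f, f):
        while True:   # halt said f(f) halts, so loop forever
            pass
    else:
        return 42     # halt said f(f) loops, so halt immediately

# Feeding confused to itself is the contradiction: if halt(confused, confused)
# is True, confused(confused) loops forever; if False, it halts right away.
# Either way, halt gave the wrong answer, so halt cannot exist.
```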

We then defined the terms "well-defined" and "computable".
We can often reduce a function F to another function G by returning G in the definition of F; then we get two implications:

G computable => F computable, because if we can compute G, and F can be written in terms of G, then F is computable, and

F non-computable => G non-computable, which is just the contrapositive: if G were computable, then F, which can be written with G, would be computable too.

To prove something is non-computable, we just have to reduce a known non-computable function, in our case halt, to that function.
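The two implications can be seen in a toy sketch (F and G here are made-up stand-ins, not the course's exact examples):

```python
def G(n):
    return n * n        # a program that computes G, so G is computable

def F(n):
    return G(n) + 1     # F is written in terms of G, hence also computable

# That's "G computable => F computable" in action. The contrapositive is how
# non-computability proofs go: write the known non-computable halt in terms
# of some function G; if G were computable, halt would be too, which is a
# contradiction, so G must be non-computable.
```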

Countability is even messier. We say a set is countable if we can enumerate its elements; for example, the natural numbers and the rational numbers are countable, but the real numbers are not. Anything with size no bigger than N is countable.

How many Python functions can be written?
Countably infinitely many: Python source is written in UTF-8 characters, so we can encode every Python function as a natural number, which makes the set of Python functions countable. However, there are uncountably infinitely many functions in total, so there must be non-programmable functions such as halt, something we can prove using diagonalization.


Diagonalization guarantees at least one mismatch in every row, so the diagonal function is not in the table of programmable functions, which proves there are more functions than there are programs.
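A finite toy version of the diagonal argument (a made-up 4x4 table standing in for the infinite one):

```python
# Pretend each row is (the start of) one programmable function's outputs.
table = [
    [0, 1, 0, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
    [1, 0, 1, 0],
]

# Flip the diagonal: the new sequence differs from row i at position i,
# so it cannot equal any row in the table.
diagonal_flip = [1 - table[i][i] for i in range(len(table))]

assert all(diagonal_flip[i] != table[i][i] for i in range(len(table)))
```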

The course has been amazing. I learned a whole lot about computers from a different perspective, the math-and-logic perspective, instead of the traditional notion that computer science is all about coding and programming. This course has given me important knowledge going into future years, and I'm feeling great! Have a good year!

Thursday, October 30, 2014

Week 7 and 8 - Big OOOOOOOOOOO

For the last two weeks we have been talking about a brand new topic: algorithm analysis and big O. What do we want from this complicated stuff? We want to see how our program's runtime depends on its input size: if the input grows really large, how will our program's runtime fare?

Ideally, we want our program to handle a huge input almost as gracefully as a small one, and to rule out runtimes that grow exponentially, so we need a way to measure how runtime grows. That measure is big O.
Because we can derive an approximate runtime formula from our code, we can work out its order of growth. For example, take x^2 + 100x + 5 and think about what happens when the input size x gets really huge, almost approaching infinity. As x -> inf, the 100x and the 5 hardly matter compared to x^2; they matter so little that in high school calculus we simply drop them. We do the same here: the function grows at the level of x^2, neither much faster nor much slower, so it is in O(x^2) and in Omega(x^2), and when a function belongs to both O and Omega of the same bound, we say it is in Theta.

A precise definition of big O is

that if we can choose a multiplier c and a breakpoint B so that c times f upper-bounds our function g at every point past B, then g grows no faster than f; it is big-Oed by f.
For Omega, we just change the <= sign to >=.
When we estimate the runtime of a function, we work from the innermost loop outward. We can slightly overestimate when looking for big O, because if the overestimated runtime doesn't grow faster than f, our original certainly doesn't. The opposite goes for Omega: we underestimate the runtime, so if our underestimate grows no slower than f, the original, which grows at least as fast as the underestimate, definitely grows no slower than f.
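As a sanity check of the definition on the earlier example x^2 + 100x + 5, here's a quick numeric experiment; the choice c = 2, B = 101 is just one pair of constants that happens to work, not the only one:

```python
def g(x):
    return x**2 + 100 * x + 5

def f(x):
    return x**2

c, B = 2, 101

# past the breakpoint, c*f upper-bounds g at every sampled point
assert all(g(x) <= c * f(x) for x in range(B, 5000))

# before the breakpoint the bound can fail, which is exactly why B exists
assert g(50) > c * f(50)
```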

Monday, October 20, 2014

Folding

It is a fine autumn dusk. I am sitting in a knowledge-filled hall of an old, prestigious university, holding a piece of paper titled

Folding

Folding is an interesting problem. I played with paper when I was little, and folding it seemed so fun, until I dove into the patterns. The task of folding is basically to fold a strip of paper...... and generalize the pattern of creases.

*This presents a different explanation from the one we talked about in class, which involves recursion: "an up becomes ^^v, and a down becomes vv^"*

The pattern wasn't hard to observe. We call a ^ an up and a v a down. Basically, the pattern is: if you count the left end, the right end, and the gaps between existing creases as spaces, then when you fold again, those spaces are filled with an alternating up, down, up, down sequence.
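Here's a short recursive sketch that generates the crease sequence, assuming we always fold in the same direction and write the first crease as a down (v). It agrees with the insertion pattern above: the new creases land at the even positions and alternate.

```python
def creases(n):
    """Crease sequence after folding a strip n times the same direction."""
    if n == 1:
        return "v"
    prev = creases(n - 1)
    # the old creases stay in order; the second half is the first half
    # reversed with every crease flipped
    flipped = prev[::-1].translate(str.maketrans("v^", "^v"))
    return prev + "v" + flipped

# e.g. creases(2) == "vv^", creases(3) == "vv^vv^^",
# and the new creases (even positions) alternate: v, ^, v, ^, ...
```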

CSC165 - Week Five and Six - Test & Proof

Did not create a blog post cuz there was a test,
Tests are always like bombs, right?
We talked about all kinds of proofs, but that's not the focus...
The test is.

Okay, the test wasn't that hard; it felt more like a 'welcome to university, see? you can do well' test. The class average is 80%+, which is a very generous mark, even compared to some of my high school tests *cough* that averaged 60%.

Proofs are the primary mark of rigour in mathematics and computer science theory; they are the foundation of every theorem we use today. In short, proofs are written in a completely sealed format: variables are declared and introduced up front, assumptions are made, and deductions are written indented under each assumption. The structure of a proof can be easily related to...


Like every discussion thread, a proof looks like a complete indented block.
We also learned contrapositive proofs and proofs by contradiction, and touched on limits.

Contrapositive proof:
Biggest hint for contrapositive proof: the contrapositive and the original are equivalent.
Wtf, is that a hint? Yes: it means proving the contrapositive equals proving the original... oh yeah, because they are the same statement. When the contrapositive is easier to prove than the original, a contrapositive proof is a really smart approach.

Proof by contradiction:
This is harder to comprehend than the other proofs. Basically, to prove P => Q by contradiction, you assume the statement is false, that is, you assume P is true and Q is false; then by smart manipulation you derive something impossible, a contradiction, which means the assumption was wrong, so P => Q must be true.

Proving something is false == proving its negation to be true, using the above methods.

Limits:
Greek...
Epsilons and deltas are like nightmares that never go away. We defined limits in a rigorous sense: for every little distance epsilon around the limit value, there must be a little range delta around x such that whenever x is inside that range, f(x) is inside the epsilon boundary.
To prove a limit, we write out the actual delta-epsilon inequalities and, by manipulating the epsilon part, work out how small delta should be so that the epsilon condition is satisfied; usually the value of delta ends up expressed in terms of epsilon.
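A numeric sanity check on a made-up example, f(x) = 3x + 1 with limit 7 at x = 2: since |f(x) - 7| = 3|x - 2|, the manipulation gives delta = epsilon / 3.

```python
def f(x):
    return 3 * x + 1

def delta_for(epsilon):
    return epsilon / 3   # derived by manipulating |f(x) - 7| < epsilon

for epsilon in [1.0, 0.1, 0.001]:
    d = delta_for(epsilon)
    # sample points with 0 < |x - 2| < delta, check f stays within epsilon of 7
    samples = [2 + t * d for t in (-0.99, -0.5, 0.5, 0.99)]
    assert all(abs(f(x) - 7) < epsilon for x in samples)
```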

Saturday, October 4, 2014

CSC165 - Week four - Bi-implications, Transitivity, Mixed Quantifiers, Proof

It's the fourth week into my university journey, and 1/3 into the semester, who would have believed that?
In this week's lecture, we learned bi-implication, transitivity, mixed quantifier and proof.
Bi-implication is just a two-way if...then statement:
P(x) <==> Q(x)
We turned it into a conjunction of two disjunctions,
(not P(x) or Q(x)) and (not Q(x) or P(x))
, and a disjunction of two conjunctions,
(not P(x) and not Q(x)) or (P(x) and Q(x))
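Both rewrites can be checked exhaustively with a quick truth table:

```python
from itertools import product

for P, Q in product([True, False], repeat=2):
    iff = (P == Q)                          # P <==> Q
    cnf = (not P or Q) and (not Q or P)     # conjunction of disjunctions
    dnf = (not P and not Q) or (P and Q)    # disjunction of conjunctions
    assert iff == cnf == dnf                # all three agree on every row
```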

We looked at transitivity, which means that when p is related to q, and q is related to r, then p is related to r. We learned that transitivity of implication is always true by reducing its negation to a contradiction.
((P=>Q) and (Q=>R)) => (P=>R)
negation = (P=>Q) and (Q=>R) and not(P=>R)
= (not P or Q) and (not Q or R) and (P and not R)
which is a contradiction
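The claim that this is always true can also be brute-forced over all eight truth-value combinations:

```python
from itertools import product

def implies(p, q):
    return (not p) or q

# all 8 rows of the truth table come out True, so it's a tautology
assert all(
    implies(implies(P, Q) and implies(Q, R), implies(P, R))
    for P, Q, R in product([True, False], repeat=3)
)
```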

Next up we talked about mixed quantifiers, and how, when we mix two different quantifiers, the statement's meaning changes when we change the order of the quantifiers.
for example:
let x,y be real numbers,
1) for all x, there exists a y, y>x
2) there exists a y, for all x, y>x
these two sentences use the same quantifiers, but because we changed the order of the quantifiers, they mean different things:
1) for every real number, there is a bigger number
2) there is a real number greater than every real number
obviously we can see that statement 1 is true and statement 2 is false, and the only difference is the order of the quantifiers
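Over a small made-up finite domain we can watch the order matter in code (using the predicate x + y == 0, since the y > x example needs all of the reals to show the contrast):

```python
D = [-2, -1, 0, 1, 2]

# "for all x there exists y": each x can pick its own y, so this is True
forall_exists = all(any(x + y == 0 for y in D) for x in D)

# "there exists y for all x": one fixed y must work for every x, so False
exists_forall = any(all(x + y == 0 for x in D) for y in D)

assert forall_exists and not exists_forall
```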

We finished the lecture by talking about proof. Proof is an extremely important concept in science: nothing is to be believed until it's proven.
A proof can be worked top-down or bottom-up, and if we can connect statements like linking nodes, joining the top to the bottom, we have a proof.

Friday, September 26, 2014

CSC165 - Week Three - Conjunction, Disjunction, Negation, Truth Table and Law

The stuff in the title is exactly what we learned this week. We began by talking about conjunction, "AND"; the symbol for conjunction is ^. It combines two statements and claims they are both true; conjunction can be seen as the intersection of the statements, and it is true only when both statements are true. "Think about the intersection."

(http://upload.wikimedia.org/wikipedia/commons/9/99/Venn0001.svg)

Next we talked about disjunction, which is like conjunction except that whenever at least one of the two statements is true, the disjunction is true. Disjunction can be seen as the union of the statements, and its symbol is v. Disjunction is false only when both statements are false.

(http://upload.wikimedia.org/wikipedia/commons/3/30/Venn0111.svg)

Next up we talked about negation. Negation says the opposite of a statement; it can be written NOT(statement), or ¬, so the negation of statement A is ¬A. An example: to negate "All Ferraris are red", we ask what the opposite of the statement is: "there is some Ferrari that is not red". In a Venn diagram, negation is represented as everything outside the statement's region.

When we are depicting the truth values of only two statements, it can easily be done with a Venn diagram; however, as the number of variables increases, Venn diagrams become a real pain, too messy to read off the truth value of a specific combination.
(http://www.newscientist.com/data/images/ns/cms/mg21528774.800/mg21528774.800-2_500.jpg)

It is beautiful, but it doesn't convey the truth values in an efficient fashion. Therefore, when we increase the number of statements or variables, we need a better way than a Venn diagram to show the truth values. This is where truth tables come into play.


In a truth table, the truth value of a compound statement can be easily found. Truth tables enumerate the outputs for all possible combinations of inputs; the number of rows grows exponentially with the number of inputs (2^n rows for n variables).
In the end we talked about laws and manipulations. They are like laws in math, but for Boolean algebra; an example is De Morgan's law.
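De Morgan's law, for example, can be verified by enumerating the whole truth table:

```python
from itertools import product

# not(P and Q) == (not P) or (not Q), and the dual with and/or swapped
for P, Q in product([True, False], repeat=2):
    assert (not (P and Q)) == ((not P) or (not Q))
    assert (not (P or Q)) == ((not P) and (not Q))
```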