Since Dr. Kevin Korb of Monash University convinced me to consider the merits of Bayesianism, I have had to take a trip back to my statistics. Naturally the statistical package R comes into the picture. I kinda like this tool – firstly, it is not just a static package; it is a full programming language, and that spells lots of interesting and creative possibilities. Secondly, it has roots in Scheme/Lisp – the authors have said that Scheme influenced the development of R.

I think the interface will be a lot of fun.

Below I will document the steps to install and use R within Scheme. The Scheme implementation that supports this is Chicken Scheme.

Before jumping in, I wish to acknowledge and thank Mr. Peter Danenberg, aka “klutometis”, the author of the R egg, for his help and for his contribution to the Scheme and R user communities.

Here I am using Mac OS X Lion.

1. Check that you have the Xcode tool set installed, then install the command line tools.

2. Make sure you have MacPorts installed; choose your OS X version on the site. You will use this to download Chicken Scheme.

3. When MacPorts has finished installing, run

sudo port selfupdate

4. Now install Chicken

sudo port install chicken

5. Make sure you also install cock and cock-utils for Chicken Scheme’s documentation. Do this

sudo chicken-install cock

then

sudo chicken-install cock-utils

6. Now install the R interface (note – I assume you installed R proper even before we got to step 1 above).

sudo chicken-install R

You may get a segfault; ignore it.

7. Run csi (the interactive REPL of Chicken Scheme), then issue (use R) within the REPL. Libraries will be loaded, and at the end you will get a warning about R_HOME not being set; ignore this or set it. You are done. You can now use R within Scheme. See the R egg’s documentation for what R calls you can issue within Scheme.

I have been reflecting philosophically on the idea of random variables. For example, suppose we want to find the height of people attending a particular university. When one decides to examine this data statistically, one is by default making an a priori presupposition that the height of people in that university is random. What does that mean? It means one is presupposing by default that the possible heights are equally likely. Let us call this variable H. By that presupposition, one is assuming that H lacks pattern, or that its pattern is uncertain. Somewhere else, maybe in this blog, I observed that human beings use “random” as a bucket for chucking in data whose behaviour we do not know. We do not know – or maybe we just do not bother to know?

Take the case of H. Assuming H is random is the same as saying: well, we won’t look at other factors in detail, because that is hard work. We can, for sure, do some regression analysis of H versus, say, age A and find the relationship between the two. Perhaps height depends on the age of the student? Finding those independent variables is just hard work. Treating things as random is convenient and expedient. Treating things as random also says we are committed to living in the world of probabilities. Should we jump off this cliff so easily?
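The regression just mentioned is easy enough to sketch. Here is a minimal ordinary least-squares fit of H against A; the data and numbers are made up purely for illustration:

```python
# Minimal sketch: least-squares fit of height H on age A.
# The data below is invented, purely for illustration.

def least_squares(xs, ys):
    # Fit ys ~ a + b * xs by minimising squared error.
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    a = mean_y - b * mean_x
    return a, b

ages    = [18, 19, 20, 21, 22]       # A, in years (hypothetical)
heights = [170, 171, 173, 174, 176]  # H, in cm (hypothetical)
intercept, slope = least_squares(ages, heights)
```

If the slope came out near zero we would be a little more justified in shrugging and calling H “random” with respect to A; a clear slope would show that the randomness was partly our own ignorance.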

Yet some aspects of our world are not governed by probabilities. For example, when mathematicians assert that the sum of the interior angles of a triangle is 180 degrees, they did not do this statistically. They did not sample every triangle in the world, measure its interior angles, and after examining lots of them conclude that the sum is 180 degrees. It did not happen that way. This judgement was arrived at axiomatically and deductively, not inductively.

Events that are considered random may not be random after all. Some results of chaos theory show that certain erratic phenomena in reality follow a definite order.

An example of this is the flipping of a coin. In general, we assume that a coin is not biased, and we say that it has an equal likelihood of coming up heads as tails. Research shows that this generally held view, which is really an idealised view, is not attainable. It is not statistics that governs this phenomenon but physics, and rightly so. The paper in question shows that the result of flipping a coin depends on its initial conditions. This result is not entirely unusual, because our intuition already says that the act of flipping a coin is governed by mechanics; our high school physics agrees with that. Where I differ in opinion from this result is that I would deny there is such a thing as an ideal or classical view in practice.
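To make the point concrete, here is a toy deterministic coin model of my own (not the paper’s model): the face showing after a flip is fixed entirely by the initial spin rate and flight time, with no probability anywhere.

```python
import math

# Toy deterministic coin model (my own illustration, not from the paper):
# the outcome is fixed entirely by the initial conditions.
def coin_outcome(omega, t_flight):
    # omega: initial angular velocity in rad/s; t_flight: seconds in the air.
    # The coin shows heads iff its final angle lies in the "heads" half-turn.
    angle = (omega * t_flight) % (2 * math.pi)
    return "heads" if angle < math.pi else "tails"
```

Feed it the same initial conditions and you get the same face every time; the apparent 50–50 behaviour only emerges from our ignorance of omega and t_flight.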

So back again to the variable H. With H, therefore, we are assuming we will work on its statistics rather than on its genetics; working on the latter is just too hard to do. We will disregard the students’ ethnic backgrounds, ages, diets, health conditions, etc. – the millions of factors that could affect the height of a student. So working with events makes us consider them random not because they really are like that, dependent on chance, but because finding out the factors that affect them is just too darn hard to do.

Just to let you know I can hack if needed, this post will serve as a note to self on how I installed Mac OS X on an old HP DC7800 desktop. This is the second Mac OS X install I have done; the first was on an ASUS EEE PC 1000h netbook. A university friend said they were throwing away their old PCs, so I asked if he could throw one over my side of the fence, and he did. I figured: why not install Mac OS X? So I ventured to install it on a non-Apple desktop.


My Mac OS X running on non-Apple hardware

First, refer to one of the methods found in this guide and install Mac OS X.

Then you need to know that:

a.) the sound will come from the rear headphone jack,

b.) the standard monitor output from the on-board graphics processor will not give the best image.

So, be prepared for a bit of investment. Do not worry; you do not have to spend more than $60 on these:

a.) Buy a USB audio stick or audio dongle

b.) Buy a Gigabyte GeForce 210

Before installing these devices, first remember to remove NVEnabler.kext from the /System/Library directory or wherever it is found. Install all the above hardware, boot up your DC7800, and it should work as normal; you will now have a superb video graphics experience.

UPDATED

Let’s be real, Java has taken off. At least where I live, there are three times as many Java programming positions as C++ or C# ones. I ran a simple search on a popular job search web site, and that is what the stats showed.

Someone joked that C++ was made for bad programmers so that they do not hurt themselves. Then Java came along to transition C++ programmers, so that bad programmers are shielded even further from inflicting harm on themselves. Now Java has momentum, and from a functional programming point of view, it will be hard to convince software development managers to wean their teams from the comfort that Java provides. No one in her/his right mind would ditch that investment – that solid code base – and go functional. That suggestion just won’t fly.

We know Clojure and Scala are two FP languages that have been created to supply functional facilities without straying too far from the JVM. Between the two, the typical Java programmer not familiar with FP will experience a strong culture shock if he/she hops into Clojure, because Clojure’s syntax is Lisp – it works with S-expressions – so the transition here is abrupt. On the other hand, Scala’s syntax follows Java closely, so the transition is softer and smoother; the typical Java-based outfit will likely go this route. However, that is where the rub happens to be. Scala allows you to program OO, and of course that is a good thing, but if a Java programmer is only going to use Scala’s OO, then what is the point? Just stick to Java. Something like this happened when C++ was introduced: programmers still programmed procedurally even though the OO facility was there.

It is the momentum: it is hard to stop programming in one paradigm and switch to another. The force of habit is great and hard to break, and it may be carried over to the new language. I’d say bite the bullet, do the paradigm, go cold turkey on your old programming language, drop your style, and adopt what is being supplied. If you are going the Scala route yet sticking with the OO aspect of it, you are wasting coding time; stick to Java – that will save you a headache. On the other hand, if you are set free from your OO inhibitions and are willing to let go of your comfort zone, then Clojure and Scala are on an equal footing. Both will allow you to call old Java code and APIs from within the language, so I’d say take the plunge.

Lastly, good luck to those going the Groovy route. That is another language not really going anywhere, so much so that even its creator lost interest before version 1.0 went out the door. Personally, I won’t bother.

UPDATE: See Dr. Dobb’s article to find out which languages gained or lost adherents in 2012.

Let me give you a short exercise. Let’s assume that we are to write a method (a function) that will double whatever is given to it. For example, if we pass it a string, the method replicates the string; if it is given a number of some sort, it will double it.

In Java this would be done using method overloading, something like this:

public int mydoubler(int x) {
    return x + x;
}

public double mydoubler(double x) {
    return x + x;
}

public String mydoubler(String x) {
    return x + x;
}

What about in C++? Well, you will probably write a function template for this, like

template <class T>
T mydoubler(T x) {
    T result;
    result = x + x;
    return result;
}

Then in the main you will have something like this…

double I = mydoubler<double>(1.0);

Now see how we can do this in Scheme/Lisp (inspired by the Racket Guide):

(define (mydouble x)
  ((if (string? x) string-append +) x x))

Notice how concise this is, at least compared to Java. The advantage over C++ in terms of brevity is only slight, but when I call this function in Scheme/Lisp I simply do this

(mydouble 2)

(mydouble "ha")

etc etc

I do not have to declare anything, like defining I as shown in the C++ code above.
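For what it is worth, the same brevity is available in any dynamically typed language; here is a Python version, just for comparison (not part of the original exercise), where + already means concatenation for strings and addition for numbers:

```python
# One-line equivalent in a dynamically typed language: no overloading,
# no templates, and no type dispatch needed, because + already does
# the right thing for both strings and numbers.
def mydouble(x):
    return x + x
```

The conciseness is not magic; it falls out of deferring the meaning of + to run time, exactly as the Scheme version defers the choice between string-append and +.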

So what is the moral of this story?

Please do not get me wrong; I do not mean to be rude. I owe a lot to C++ and Java – I earned a good living for more than 12 years with them, and I am still paid for Java programming today on a casual basis.

My point is that most of the stuff we see in modern programming languages has been borrowed from the old one – LISP. For example, javadoc as a concept was already present in LISP for many years; that is where Java got the idea. Then we see new languages today, like Ruby, borrowing concepts from the old one. So the question is: if you are going to invent a new programming language and borrow from the old one, what the heck was wrong with the old one, since you keep borrowing from it and embedding its features into your new language?

According to this article, functional programming is on the rise. I hope that proves true, because it offers a lot more excitement, elegance and creativity to programmers. That is where I am right now.

Symbols of infinity, from Wikipedia

I have been exploring the idea that infinite sets might not exist. This is suggested by Prof. N. Wildberger in his YouTube sermons here. It is interesting that the preaching ties the existence of an infinite set to our ability to write it down. Is this some kind of extreme constructivist teaching or something?

Indeed, Georg Cantor must have been considered a heretic when he preached the idea of infinite sets in the late 19th century. Today, for sure, it is part of math’s orthodox doctrine.

It so happens that I am studying model theory too, and as I reflected on this, I asked myself what model theory might say about the issue.

Getting out my copy of D. Marker’s Model Theory: An Introduction, I stumbled on his Example 1.2.1 (Infinite Sets), which goes like this…

Consider the empty language $\mathcal{L}=\emptyset$, that is, no functions, relations, nor constants. Consider now the $\mathcal{L}$-theory where we have, for each $n$, the sentence $\phi_n=\exists x_1 \exists x_2 \exists x_3...\exists x_n \bigwedge\limits_{1 \leq i < j \leq n}x_i \neq x_j$. This says that there are at least $n$ distinct elements. Then a structure $\mathcal{M}$ is a model of this theory iff the carrier set/universe of $\mathcal{M}$, called $M$, is infinite.
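For instance, with $n = 3$ the sentence spells out as

$\phi_3 = \exists x_1 \exists x_2 \exists x_3 \, (x_1 \neq x_2 \wedge x_1 \neq x_3 \wedge x_2 \neq x_3)$,

which a structure satisfies precisely by exhibiting three pairwise distinct elements in its universe.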

This is not obvious, and the example gives no proof. However, I will try to show why $M$ must be infinite.

Proof:

($\Rightarrow$)

We only prove this direction. For a contradiction, assume there is a model $\mathcal{M} \models T=\{ \phi_n\mid n \in \mathbb{N} \}$ whose universe $M$ is finite.

$M$ being finite means it has some finite cardinality $k$, i.e., $|M| = k$. We know that for every $k \in \mathbb{N}$ there exists a $q \in \mathbb{N}$ such that $q > k$; for example, $q = k + 1$.

Now take $n = q = k + 1$. Since $\mathcal{M}$ is a model of $T$, in particular $\mathcal{M} \models \phi_{k+1}$. By the definition of satisfaction, there exist $a_1, a_2, a_3, \ldots, a_{k+1} \in M$ with $a_i \neq a_j$ for all $1 \leq i < j \leq k+1$. So $M$ contains $k+1$ pairwise distinct elements, contradicting $|M| = k$.

$\therefore \; M$ must be infinite.

The proof for ($\Leftarrow$) follows a similar line.

Q.E.D.

The professor questions the proof

I do part-time teaching, and one of the reasons I enjoy it very much is that I get to learn how young people think. They are very creative, and I am fortunate to encounter some of these smart kids in class.

It is the end of the semester, and I happen to be doing some marking for one of the professors at one of the universities where I work. The subject is Theory of Computing, and one of the questions the professor asked in the exam was…

Q. Prove that the Halting Problem is undecidable.

Here are a couple of proof techniques employed by some of our young people.

Proof:

Alan Turing says so. Q.E.D.

Proof:

Our lecture notes state it is true.

Our lecture notes are assumed to be true.

$\therefore$ the Halting Problem is undecidable? Q.E.D.

Proof:

If a program knew it was going to crash, it would never have run. Q.E.D.

————

I mean, how could you fault these answers, they have a point! LOL. :-)

Do share if you have some examples of this type of proof technique; I will only be too happy to read them.
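For the record, the standard answer is Turing’s diagonal argument. A rough Python rendering of the contradiction (the names are mine, purely illustrative):

```python
# Sketch of the diagonal argument: suppose a decider halts(f) existed,
# returning True exactly when calling f() would eventually halt.
def make_paradox(halts):
    def paradox():
        if halts(paradox):
            while True:      # halts said we halt, so loop forever
                pass
        # halts said we loop forever, so halt immediately
    return paradox

# Whatever halts(paradox) answers, paradox() does the opposite,
# so no such decider halts can exist.
```

The students’ versions are funnier, but this one marks better.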