Monday, May 9, 2011

While using GHCi 6.10.4 (a pretty old version) to write concurrent code that forks off threads which sleep before performing some delayed actions, I found some strange behavior. I used map (flip addUTCTime time) [1..5] to generate the times at which the delayed actions should be performed, and the interpreter would lock up. When I changed it to map (flip addUTCTime time) [1,2,3,4,5], everything worked as expected. Maybe there is something tricky in the implementation of enumFromTo :: NominalDiffTime -> NominalDiffTime -> [NominalDiffTime].
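For reference, here is a minimal sketch of the kind of code involved, assuming the delayed actions just print a message; apart from the map (flip addUTCTime time) [1,2,3,4,5] expression, the names and details are illustrative rather than the actual code.

import Control.Concurrent (forkIO, threadDelay)
import Control.Monad (forM_)
import Data.Time.Clock (addUTCTime, diffUTCTime, getCurrentTime)

main :: IO ()
main = do
  time <- getCurrentTime
  -- [1..5] here is what locked up GHCi 6.10.4; [1,2,3,4,5] worked
  let times = map (flip addUTCTime time) [1,2,3,4,5]
  forM_ times $ \t -> forkIO $ do
    now <- getCurrentTime
    -- sleep until the scheduled time, then do the delayed action
    threadDelay (round (diffUTCTime t now * 1000000))
    putStrLn ("delayed action at " ++ show t)
  -- keep the main thread alive long enough for the forked threads
  threadDelay 6000000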

Monday, April 25, 2011

I started using Control.Concurrent to write concurrent code in Haskell. It's easier than Java's concurrency model: since values are immutable in Haskell, there is no need to worry about values being changed by other threads, which in Java means having to put locks in the right places and knowing how to use the java.util.concurrent classes.

Of course, it's still possible to deadlock in Haskell, for example when one thread runs do { a <- takeMVar ma; b <- readMVar mb; putMVar ma a } while another runs do { b <- takeMVar mb; a <- readMVar ma; putMVar mb b }.
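Here is a self-contained sketch of that deadlock, assuming both MVars start out full; whether it actually deadlocks depends on how the two threads interleave.

import Control.Concurrent (forkIO, threadDelay)
import Control.Concurrent.MVar (newMVar, takeMVar, readMVar, putMVar)

main :: IO ()
main = do
  ma <- newMVar 'a'
  mb <- newMVar 'b'
  -- each thread empties one MVar, then blocks reading the other;
  -- if the first thread happens to finish before the second starts,
  -- there is no deadlock
  _ <- forkIO $ do { a <- takeMVar ma; b <- readMVar mb; putMVar ma a }
  _ <- forkIO $ do { b <- takeMVar mb; a <- readMVar ma; putMVar mb b }
  threadDelay 1000000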

Also, there is no getting around dealing with external concurrency issues, such as with databases.

Monday, April 11, 2011

Thinking about real numbers in the 01_ programming language, the fractional part can be represented in big-endian base 1/2. (Or is it little endian? Bits to the left represent larger numbers, but smaller powers of 1/2.) Infinite lists of bits can represent numbers that are ≥ 0 and ≤ 1; for example, 1000... represents 1/2 and 0101... represents 1/3.

Addition of fractional numbers can be defined as

+/fractional 0a 0b = +/fractional/carry a b 0_ 1_ +/fractional a b.
+/fractional 1a 0b = +/fractional/carry a b 1_ 0_ +/fractional a b.
+/fractional 0a 1b = +/fractional/carry a b 1_ 0_ +/fractional a b.
+/fractional 1a 1b = +/fractional/carry a b 0_ 1_ +/fractional a b.

where evaluating the carry of fractional addition is

+/fractional/carry 0a 0b carry-zero carry-one = carry-zero.
+/fractional/carry 1a 0b carry-zero carry-one = +/fractional/carry a b carry-zero carry-one.
+/fractional/carry 0a 1b carry-zero carry-one = +/fractional/carry a b carry-zero carry-one.
+/fractional/carry 1a 1b carry-zero carry-one = carry-one.

And the subtraction of fractional numbers is

-/fractional 0a 0b = -/fractional/borrow a b 0_ 1_ -/fractional a b.
-/fractional 1a 0b = -/fractional/borrow a b 1_ 0_ -/fractional a b.
-/fractional 0a 1b = -/fractional/borrow a b 1_ 0_ -/fractional a b.
-/fractional 1a 1b = -/fractional/borrow a b 0_ 1_ -/fractional a b.

where evaluating the borrow of fractional subtraction is

-/fractional/borrow 0a 0b borrow-zero borrow-one = -/fractional/borrow a b borrow-zero borrow-one.
-/fractional/borrow 1a 0b borrow-zero borrow-one = borrow-zero.
-/fractional/borrow 0a 1b borrow-zero borrow-one = borrow-one.
-/fractional/borrow 1a 1b borrow-zero borrow-one = -/fractional/borrow a b borrow-zero borrow-one.

Unlike the addition and subtraction of integers, these operations, in general, require infinite time and memory to calculate a finite number of bits, due to the carry and borrow.
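To make the lookahead concrete, here is a rough Haskell transliteration of the addition rules above, using infinite lists of Bool for the bit lists; the names are mine and it is only a sketch.

type Frac = [Bool]       -- infinite, big-endian base-1/2 bit list

-- carryBit a b z o mirrors +/fractional/carry: pick z if no carry ripples
-- back from the remaining bits, o if one does; this scan is what may never
-- terminate
carryBit :: Frac -> Frac -> Bool -> Bool -> Bool
carryBit (a:as) (b:bs) z o
  | a && b    = o                    -- 1 and 1: a carry is certain
  | a || b    = carryBit as bs z o   -- 1 and 0: depends on what follows
  | otherwise = z                    -- 0 and 0: no carry can get through

-- addFrac mirrors +/fractional: each output bit is chosen by whether the
-- tails produce a carry
addFrac :: Frac -> Frac -> Frac
addFrac (a:as) (b:bs) = carryBit as bs (a /= b) (a == b) : addFrac as bs
addFrac _      _      = []           -- only infinite lists are expected

third :: Frac
third = cycle [False, True]          -- 0.0101... = 1/3

-- take 4 (addFrac third third) is [True,False,True,False], i.e. 0.1010... = 2/3,
-- but take 1 (addFrac (False : repeat True) (repeat False)) never finishes:
-- adding 0.0111... to 0.0000... means scanning forever to rule out a carry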

Monday, March 28, 2011

Thinking about numbers in the 01_ programming language, the natural way to represent integers would be to use little-endian base 2. To further simplify things, consider only infinite lists of bits. So the important numbers are

zero = 0 zero.
one = 1 zero.

Negative numbers can also be represented

-one = 1 -one.

Integer addition can be defined as

+/integer 0a 0b = 0 +/integer a b.
+/integer 1a 0b = 1 +/integer a b.
+/integer 0a 1b = 1 +/integer a b.
+/integer 1a 1b = 0 +/integer/carry a b.

where integer addition with carry is

+/integer/carry 0a 0b = 1 +/integer a b.
+/integer/carry 1a 0b = 0 +/integer/carry a b.
+/integer/carry 0a 1b = 0 +/integer/carry a b.
+/integer/carry 1a 1b = 1 +/integer/carry a b.
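As a sanity check, here is a rough Haskell transliteration of these addition rules, with infinite little-endian lists of Bool standing in for the bit lists (the names are mine).

type I01 = [Bool]             -- infinite, little-endian

zero, one, minusOne :: I01
zero     = False : zero
one      = True  : zero
minusOne = True  : minusOne   -- ...111, i.e. -1

-- addI mirrors +/integer, addC mirrors +/integer/carry
addI, addC :: I01 -> I01 -> I01
addI (a:as) (b:bs)
  | a && b    = False : addC as bs   -- 1+1 = 0, carry
  | a || b    = True  : addI as bs   -- 1+0 or 0+1 = 1
  | otherwise = False : addI as bs   -- 0+0 = 0
addI _ _ = []
addC (a:as) (b:bs)
  | a && b    = True  : addC as bs   -- 1+1+1 = 1, carry
  | a || b    = False : addC as bs   -- 1+0+1 = 0, carry
  | otherwise = True  : addI as bs   -- 0+0+1 = 1
addC _ _ = []

-- take 4 (addI one one)      is [False,True,False,False]   (2)
-- take 4 (addI one minusOne) is [False,False,False,False]  (0)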

And integer subtraction is

-/integer 0a 0b = 0 -/integer a b.
-/integer 1a 0b = 1 -/integer a b.
-/integer 0a 1b = 1 -/integer/borrow a b.
-/integer 1a 1b = 0 -/integer a b.

where integer subtraction with borrow is

-/integer/borrow 0a 0b = 1 -/integer/borrow a b.
-/integer/borrow 1a 0b = 0 -/integer a b.
-/integer/borrow 0a 1b = 0 -/integer/borrow a b.
-/integer/borrow 1a 1b = 1 -/integer/borrow a b.
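And a matching sketch for the subtraction rules, self-contained and again assuming infinite little-endian lists of Bool.

type I01 = [Bool]             -- infinite, little-endian

zero, one :: I01
zero = False : zero
one  = True  : zero

-- subI mirrors -/integer, subB mirrors -/integer/borrow
subI, subB :: I01 -> I01 -> I01
subI (a:as) (b:bs)
  | a == b    = False : subI as bs   -- 0-0 or 1-1 = 0
  | a         = True  : subI as bs   -- 1-0 = 1
  | otherwise = True  : subB as bs   -- 0-1 = 1, borrow
subI _ _ = []
subB (a:as) (b:bs)
  | a == b    = True  : subB as bs   -- 0-0-1 or 1-1-1 = 1, borrow
  | a         = False : subI as bs   -- 1-0-1 = 0
  | otherwise = False : subB as bs   -- 0-1-1 = 0, borrow
subB _ _ = []

-- take 4 (subI zero one) is [True,True,True,True]     (-1)
-- take 4 (subI one one)  is [False,False,False,False] (0)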

Monday, March 14, 2011

For work, some of the code is in JSTL (JavaServer Pages Standard Tag Library) EL (Expression Language). JSTL EL is weakly typed and dynamically typed. There is no compile-time checking.

One day, a coworker sent me a message saying some stuff stopped working after merging in some of my changes. I tried running it, and it didn't work. I added logging to the code I had changed, which was all Java, and it was working fine. I then tracked the problem down to some JSTL (untouched by me):

<c:set var="flag" value="{flag1 || flag2}"/>
...
<c:if test="${flag}">
... stuff that failed to appear ...
</c:if>

The first line should have been

<c:set var="flag" value="${flag1 || flag2}"/>

Without the $, flag was set to the literal string {flag1 || flag2}, which the test coerces to false, so the body never appeared. This is the type of stupid mistake that compile-time checking, especially with static typing, can catch.

Monday, February 28, 2011

I wrote a compiler from 01_ to LLVM (Low Level Virtual Machine) in a week of weekends and evenings. LLVM's static type checking caught numerous silly mistakes. However, I got bitten twice because LLVM does not warn, at least by default, when the declared calling conventions of the caller and the callee do not match. (I use fastcc because tail-call elimination is important for 01_ programs, and failed to specify it in the caller those two times.) This seems like something that could be checked by the computer, and it reminds me why I prefer statically typed programming languages over dynamically typed ones.

I wrote the parser in one evening; I had written one before, so it was mostly a matter of getting reacquainted with the Parsec parsing library.

I spent another evening and a weekend learning the LLVM assembly language and writing the runtime library.

I spent another couple of evenings writing the code generator.

I spent the last evening chasing down memory leaks in the generated code.

The code is available at github.com.

Monday, February 14, 2011

I had been thinking about compiling 01_ to LLVM for a while, and finally decided to get started on it by playing around with LLVM assembly language. One thing I like about LLVM assembly language is the static type checking. Anyhow, I started writing a runtime library for 01_. The only data type in 01_ is a lazy list of bits. The data type does not permit circular references, so I'll use reference counting garbage collection.

Here's my first stab at the data type for 01_ values:

%val = type { i32, i1, %val*, { i1, %val* } (i8*)*, void (i8*)*, i8* }

which, in a C-like syntax would be:

struct val {
  int refcount;
  bool bit;
  struct val *next;
  struct { bool bit; struct val *next; } (*eval)(void *);
  void (*free_env)(void *);
  void *env;
};

where, for an unevaluated promise, bit and next are undefined and eval is non-null; for a value evaluating to nil, bit is undefined and next and eval are null; and for a non-nil value, bit contains the bit value, next points to the next value, and eval is null. That's a pretty large data structure for a single element of a bit list. I could shrink it by the size of a pointer by using the same location for next and env and casting, as env and free_env are never valid at the same time as next and bit. I won't do that, though, because it would make the code less clear, and having more understandable code is more important to me in this project.