  • Visualizing the Turing Tarpit

    Jason Hemann and I recently had a paper accepted at FARM called “Visualizing the Turing Tarpit.” The idea grew out of a talk that Jason gave at our weekly PL Wonks seminar on the minimalist programming languages Iota and Jot. At the end of the talk, Ken Shan asked whether this could be used to do some kind of cool fractal visualization of programs. That night, several of us pulled out our computers and started hacking on Iota and Jot interpreters.

  • Why Write Compilers in Scheme?

    One of the questions Klint Finley asked me for the Wired article about Harlan was “Why Scheme?” I wasn’t really satisfied with my answer, so I thought I’d answer it more completely here. Besides the fact that we were already experienced writing compilers in Scheme, Scheme has a lot of features that are very useful to compiler writers.
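
    To give one small, illustrative example of what I mean (a sketch of my own, not taken from the article): because programs are just s-expressions, a compiler pass is often a short recursive function built out of quasiquote and a little dispatch.

        ;; A toy simplification pass that rewrites (add 0 e) and
        ;; (add e 0) to e.
        (define (simplify expr)
          (if (and (pair? expr) (eq? (car expr) 'add))
              (let ((lhs (simplify (cadr expr)))
                    (rhs (simplify (caddr expr))))
                (cond ((equal? lhs 0) rhs)
                      ((equal? rhs 0) lhs)
                      (else `(add ,lhs ,rhs))))
              expr))

        (simplify '(add 0 (add x 0)))   ; => x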

  • Why is Harlan called Harlan?

    One of the more unexpected things to have happened after releasing Harlan was that I was contacted by a couple of people who are named Harlan. One of the common questions about Harlan is actually where the name comes from, so I thought I’d take the time to tell the story here.

  • Announcing the release of Harlan

    I am happy to announce that after about two years of work, I have made the code for Harlan available to the public.

  • What is Macro Hygiene?

    One important, though surprisingly uncommon, feature of macro systems is that of hygiene. I mentioned in a previous post that I would eventually say something about hygiene. It turns out macro hygiene is somewhat tricky to define precisely, and I know a couple of people who are actively working on a formal definition of hygiene. The intuition behind hygiene isn’t too bad though. Basically, we want our macros to not break our code. So how can macros break code?
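
    To make that concrete, here is the classic variable-capture example as a minimal sketch in syntax-rules (the macro and variable names are my own, not from the post):

        ;; `my-or` binds a temporary `t` so its first argument is only
        ;; evaluated once.
        (define-syntax my-or
          (syntax-rules ()
            ((_ e1 e2)
             (let ((t e1))
               (if t t e2)))))

        ;; The user happens to have their own `t` in scope:
        (let ((t #t))
          (my-or #f t))
        ;; A hygienic expander keeps the user's `t` and the macro's `t`
        ;; distinct, so this evaluates to #t. An unhygienic expansion
        ;; would produce (let ((t #f)) (if t t t)) and return #f -- the
        ;; macro broke the code.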

  • Some Simple GPU Optimizations

    One of the goals of designing a high level GPU programming language is to allow the compiler to perform optimizations on your code. One optimization we’ve been doing for a while in Harlan is one I’ve been calling “kernel fusion.” This is a pretty obvious transformation to do, and many other GPU languages do it. However, kernel fusion comes in several different variants that I’d like to discuss.
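
    As a rough illustration of the simplest variant, map fusion, here is a plain Scheme sketch (Harlan's actual syntax and kernel forms may differ; the function names are mine):

        ;; Before fusion: two traversals, which on a GPU would mean two
        ;; kernel launches and an intermediate vector.
        (define (before xs)
          (map (lambda (x) (+ x 1))
               (map (lambda (x) (* x x)) xs)))

        ;; After fusion: one traversal, one kernel launch, and no
        ;; intermediate vector.
        (define (after xs)
          (map (lambda (x) (+ (* x x) 1)) xs))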

  • Using Scheme with Travis CI

    Early on in the development of the Harlan compiler, my collaborators and I realized we were spending a lot of time writing compilers that translate Scheme-like languages into C or C++. A lot of this code should be common between projects, so we decided to factor it out into the Elegant Weapons project. Elegant Weapons even had a trivial test suite. Unfortunately, because the primary consumer of Elegant Weapons was Harlan, the design was still far too specific to Harlan. As we realized when Nada Amin submitted a fix for the Elegant Weapons tests, we weren’t even running our own tests anymore. Clearly we needed to do something better if Elegant Weapons were truly going to be a project worthy of existing on its own.

  • Some Picky Presentation Tips

    I just spent the last week at IPDPS in Boston. It was a good time. I got to meet a few new people and connect with a lot of friends who are now living in the Boston area. I also presented our work on Rust for the GPU at HIPS. In the course of watching a lot of presentations, I came up with a few tips. I admit I did not follow all of these in my own presentation, but hopefully we can all learn from them.

  • Data Parallel Operators

    In my previous post, we discussed some of the data structures that support data parallel programming. Now we’ll turn our attention to the common operators that manipulate these data structures. I’ll discuss several of them: map, reduce, scan, permute, back-permute and filter.
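
    For reference, here are sequential Scheme sketches of a few of these operators; the names and argument orders are my own choices, not necessarily the ones used in the post:

        ;; reduce: combine every element into a single value.
        ;; (reduce + 0 '(1 2 3)) => 6
        (define (reduce f init xs)
          (if (null? xs)
              init
              (reduce f (f init (car xs)) (cdr xs))))

        ;; scan: like reduce, but keep every partial result.
        ;; (scan + 0 '(1 2 3)) => (1 3 6)
        (define (scan f init xs)
          (if (null? xs)
              '()
              (let ((acc (f init (car xs))))
                (cons acc (scan f acc (cdr xs))))))

        ;; back-permute: a gather, so element i of the result comes from
        ;; position (list-ref idxs i) of the input.
        ;; (back-permute '(a b c) '(2 0 1)) => (c a b)
        (define (back-permute xs idxs)
          (map (lambda (i) (list-ref xs i)) idxs))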

  • Data Parallel Data Structures

    Data parallelism is a style of programming where essentially the same operation is applied to a large collection of values. This style became popular during the 80s and early 90s as a convenient way of programming large vector processors. Data parallelism has remained popular, especially in light of the rise of GPGPU programming. Often, data parallel programming is used for fine-grained parallelism, but it works at larger granularity too. For example, MapReduce is a restricted example of data parallelism.
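
    As a tiny illustration of the style, applying the same operation across a whole collection is just map in Scheme; on a vector machine or GPU each application could run on its own lane or thread:

        ;; Double every element "at once"; sequentially this is a loop,
        ;; but nothing about the meaning requires any particular order.
        (map (lambda (x) (* 2 x)) '(1 2 3 4 5))   ; => (2 4 6 8 10)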