Blog

Why Go Is Not Good

I like Go. I use it for a number of things (including this blog, at the time of writing). Go is useful. With that said, Go is not a good language. It's not bad; it's just not good.

We have to be careful with languages that aren't good, because we might end up stuck using them for the next 20 years.

This is a list of my chief complaints about Go. Some of these are mentioned frequently, and some are rarely discussed.

I've also included some comparisons to both Rust and Haskell (which I consider to be good languages). This is to show that all the problems listed here have already been solved.

Generic Programming

The Problem

We want to write code that we can use for lots of different things. If I write a function to sum a list of numbers, it would be nice if I could use it on lists of floats, lists of ints, and lists of anything else that can be summed. It would be extra nice if this code also maintained the type safety and speed of writing separate functions for adding lists of ints, lists of floats, etc.

A Good Solution: Constraint-Based Generics and Parametric Polymorphism

I think the best generic programming system we have right now is what both Rust and Haskell use. This is often referred to as a system of "constrained types". In Haskell, this system is called "type classes", and in Rust it's called "traits". It works like this:

(Rust, version 0.11)

fn id<T>(item: T) -> T {
   item
}

(Haskell)

id :: t -> t
id a = a

For this toy example, we've defined a generic function id, which takes some thing and returns that same thing. The cool thing is that id works on any thing, not just things of a certain type. In both Haskell and Rust, id preserves the type information of any variables passed into it, preserves static type safety, and has no extra runtime overhead from genericism. You can imagine using something just like this to write a clone function.

We can also use this to make generic data structures. For example,

(Rust)

struct Stack<T>{
   items: Vec<T>
}

(Haskell)

data Stack t = Stack [t]

Again, we get complete static type safety and no runtime overhead from generic programming.

Now, if we want to write a generic function that actually does something to its arguments, we may need to somehow tell the compiler "only allow this function to work on arguments that allow this to be done to them". For example, if we wanted to write a function that adds its three arguments and returns the sum, we need to tell the compiler that any arguments to the function must support addition. That looks like this:

(Rust)

fn add3<T:Num>(a:T, b:T, c:T)->T{
   a + b + c
}

(Haskell)

add3 :: Num t => t -> t -> t -> t
add3 a b c = a + b + c

What we're telling the compiler is "The arguments to add3 can be of any type t, with the constraint that t must be a Num (Numeric type)." Because the compiler knows that Nums can be added, this code passes type checks. These constraints can also be used in data structure definitions. This is an elegant and easy way of doing 100% type-safe and flexible generic programming.

Go's Solution: interface{}

As a consequence of Go's mediocre type system, Go has very bad support for generic programming.

You can write generic functions easily enough. Let's say you wanted to write a function that printed a hash code for objects that could be hashed. You can define an interface that allows you to do this with static type safety guarantees, like this:

(Go)

type Hashable interface {
  Hash() []byte
}

func printHash(item Hashable) {
   fmt.Println(item.Hash())
}

Now, you can supply any Hashable object to printHash, and you also get static type checking. This is good.
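As a quick illustration (a minimal sketch; the User type and its trivial hash are invented for the example), any type that defines a Hash() []byte method satisfies Hashable implicitly and can be passed to printHash:

(Go)

type User struct {
    Name string
}

// Hash returns a stand-in "hash" of the user's name (not a real hash function);
// having this method is all it takes for User to satisfy Hashable.
func (u User) Hash() []byte {
    return []byte(u.Name)
}

func main() {
    printHash(User{Name: "gopher"}) // statically checked: User is Hashable
}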

What if you wanted to write a generic data structure? Let's write a simple Linked List. The idiomatic way to write a generic data structure in Go is

(Go)

type LinkedList struct {
   value interface{}
   next *LinkedList
}

func (oldNode *LinkedList) prepend(value interface{}) *LinkedList {
   return &LinkedList{value, oldNode}
}

func tail(value interface{}) *LinkedList {
   return &LinkedList{value, nil}
}

func traverse(ll *LinkedList) {
   if ll == nil {
     return
   }
   fmt.Println(ll.value)
   traverse(ll.next)
}

func main() {
   node := tail(5).prepend(6).prepend(7)
   traverse(node)
}

Notice anything? The type of value is interface{}. interface{} is what we call the "top type", meaning that all other types are subtypes of interface{}. This is roughly equivalent to Object in Java. Yikes. (Note: There is some disagreement on whether a top type actually exists in Go, since Go claims to have no subtyping. Regardless, the analogy holds.)

The "correct" way to build generic data structures in Go is to cast things to the top type and then put them in the data structure. This is how Java used to work, circa 2004. Then people realized that this completely defeated the purpose of type systems. When you have data structures like this, you completely eliminate any of the benefits that a type system provides. For example, this is perfectly valid code:

node := tail(5).prepend("Hello").prepend([]byte{1,2,3,4})

This makes absolutely no sense in a well-structured program. You might be expecting a linked list of integers, but at some point, some tired, coffee-fueled programmer with a deadline to meet accidentally put a string in there somewhere. Because generic data structures in Go don't know the type of their values, the Go compiler won't correct the programmer, and your program will simply crash when you try to down-cast from interface{}.
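To make the failure concrete, here is a sketch (sumList is a made-up helper) of a consumer that assumes the list holds ints. It compiles without complaint and only blows up at run time, when the type assertion hits the string:

(Go)

// sumList assumes every element is an int; the type system cannot enforce this.
func sumList(ll *LinkedList) int {
    sum := 0
    for ; ll != nil; ll = ll.next {
        sum += ll.value.(int) // panics at runtime if a non-int was inserted
    }
    return sum
}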

The same issues apply to any generic data structure, whether it's a list, map, graph, tree, queue, etc.

Language Extensibility

The Problem

High-level languages often have keywords and symbols that are shorthand for relatively complex tasks. For example, in many languages, there is a shorthand for iterating over all the elements in a collection of data, like an array.

(Java)

for (String name : names) { ... }

(Python)

for name in names: ...

What would be nice is if we could use that syntax to operate on any type of collection - not just those built into the language (like arrays).

What would also be nice is if we could define operations like addition for our type, and then we could do things like

(Python)

point3 = point1 + point2

A Good Solution: Operators are Functions

A good solution is to make built-in operators correspond to functions/methods of a certain name, and to make keywords aliases to standard functions/methods.

Some languages, like Python, Rust, and Haskell, let you overload operators. All you have to do is define a function for your type, and whenever you use a given operator (like "+"), it just calls the function you've defined. In Python, the "+" operator corresponds to __add__(). In Rust, "+" corresponds to the add() method of the Add trait. In Haskell, "+" corresponds to the (+) function in the Num typeclass.

Many languages have a way to extend various keywords, like the for-each loop. Haskell doesn't have loops, but in languages like Rust, Java, and Python, there is usually some concept of an "Iterator" that allows for the usage of for-each loops on any kind of collection data structure.

A potential downside of this is that someone could define operators to do something completely unintuitive. For example, a crazy coder might define the "-" operation on two vectors to be the dot product. But that problem isn't specific to operator overloading. No matter the programming language, people can always write poorly-named functions.

Go's Solution: N/A

Go does not support operator overloading or keyword extensibility.

So what if we want to implement the range keyword for something else, like a tree or a linked list? Too bad. That is not part of the language. You can only range over builtins. Same thing with the make keyword. It can only be used to allocate space and initialize built-in data structures.

The closest thing to a flexible iterator keyword is building a wrapper around your data structure that returns a chan and then iterating over the chan (sketched below), but that is slow, complicated, and bug-prone.
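For concreteness, here is roughly what that workaround looks like for the LinkedList from earlier (a sketch; the Iterate method name is made up). A goroutine walks the list and sends each value down a channel so callers can write for v := range ll.Iterate():

(Go)

// Iterate returns a channel that yields the list's values in order. This costs
// a goroutine plus a channel send per element, and the goroutine leaks if the
// caller stops ranging early.
func (ll *LinkedList) Iterate() <-chan interface{} {
    ch := make(chan interface{})
    go func() {
        for node := ll; node != nil; node = node.next {
            ch <- node.value
        }
        close(ch)
    }()
    return ch
}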

The justification for this is "It's very simple to understand, and the code that I read on the page is the code that is executed". That is, if Go allowed for extending things like range, it might cause confusion because the implementation details of this particular range mechanism might not be obvious. This argument makes little sense to me, because people are going to have to iterate over the data structure regardless of whether or not Go makes it easy and concise. Instead of hiding the implementation details inside a range() function, we have to hide the implementation details in some other helper function. Not a big improvement. All well-written code is easy to read, and most poorly-written code is hard to read. Obviously Go can't change that.

Base Cases and Failure Conditions

The Problem

When working with recursive data structures, like linked lists or trees, we want a way to indicate that we've reached the end of the data structure.

When working with functions that may fail, or data structures that may have pieces of data missing, we want a way to indicate that we've reached some kind of failure condition.

Go's Solution: Nil (and multiple return)

I'm going to discuss Go's solution first, because it gives context for the better solution discussed below.

Go has the null pointer (nil). I consider it a shame whenever a new language, tabula rasa, chooses to re-implement this unnecessary bug-inducing feature.

The null pointer has a rich and bug-riddled history. For historical and practical reasons, there is almost never useful data stored at the memory address 0x0, so pointers to 0x0 are generally used to represent some sort of special condition. For example, a function that returns a pointer might return 0x0 if it fails. Recursive data structures might use 0x0 to represent a base case (like the leaf of a tree, or the end of a linked list). This is what we use it for in Go.

However, using the null pointer in this way is unsafe. The null pointer is essentially a backdoor out of the type system; it allows you to create an instance of a type that isn't really of that type at all. It is extremely common for a programmer to forget to account for the possibility that a pointer may be null, potentially leading to (at best) crashes and (at worst) exploitable vulnerabilities. The compiler can't easily protect against this, because the null pointer subverts the type system.

To Go's credit, it is idiomatically correct (and encouraged) to leverage Go's multiple return mechanism to return a second "failure" value if there is a possibility that a function will fail. However, this mechanism can easily be ignored or misused, and it generally isn't useful for representing base cases in recursive data structures.
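For reference, the idiom looks like this (a toy divide function, invented for the example). Note that nothing forces the caller to check err; it can be ignored without any complaint from the compiler:

(Go)

// divide reports failure through a second return value instead of a nil pointer
// or a magic sentinel.
func divide(a, b float64) (float64, error) {
    if b == 0 {
        return 0, errors.New("division by zero")
    }
    return a / b, nil
}

func main() {
    q, err := divide(1, 0)
    if err != nil {
        fmt.Println("error:", err) // the compiler never forces this check
        return
    }
    fmt.Println(q)
}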

A Good Solution: Algebraic Types and Type-safe Failure Modes

Instead of violating the type system to represent error conditions and base cases, we can use the type system to safely encapsulate these cases.

Let's say we want to build a linked list type. We want to represent two possible cases: either we're not at the end of the linked list, and this linked list element has some data attached to it, or we are at the end of the linked list. A type-safe way to do this is to have separate types representing both of these cases, and then combine them into a single type (using algebraic data types). Let's say we have a type called Cons, representing a linked list element with some data, and End, representing the end of a linked list. We can write those like this:

(Rust)

enum List<T> {
  Cons(T, Box<List<T>>),
  End
}
let my_list = Cons(1, box Cons(2, box Cons(3, box End)));

(Haskell)

data List t = End | Cons t (List t)
let my_list = Cons 1 (Cons 2 (Cons 3 End))

Each definition specifies a base case (End) for any recursive algorithms operating on the data structure. Neither Rust nor Haskell allows null pointers, so we know with 100% certainty that we will never experience a null pointer dereference bug (unless we do some really crazy low-level stuff).

These algebraic data types also allow for incredibly concise code that is easier to check for correctness, through techniques like pattern matching (described later).

Now, what should we do if a function may or may not return something of a certain type, or a data structure may or may not contain a piece of data? That is, how can we encapsulate a failure condition in our type system? To solve this, Rust has something called Option, and Haskell has something called Maybe.

Imagine a function that searches an array of non-empty strings for a string starting with 'H', returns the first such string if it exists, and returns some kind of failure condition if it doesn't exist. In Go, we might return nil on failure. Here's how to do this safely, with no dangerous pointers, in Rust and Haskell.

(Rust)

fn search<'a>(strings: &'a[String]) -> Option<&'a str>{
  for string in strings.iter() {
    if string.as_slice()[0] == 'H' as u8 {
      return Some(string.as_slice());
    }
  }
  None
}

(Haskell)

search [] = Nothing
search (x:xs) = if (head x) == 'H' then Just x else search xs

Instead of returning a string or a null pointer, we return an object that may or may not contain a string. We never return a null pointer, and programmers using search() know that it may or may not succeed (because its type says so), and they must prepare for both cases. Goodbye, null dereference bugs.
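For contrast, here is a sketch of how the same search tends to be written in Go, with a nil-able pointer standing in for the missing case. The signature gives the caller no hint that the result must be checked before dereferencing:

(Go)

// search returns a pointer to the first string starting with 'H', or nil if
// there is none. Nothing in the type forces callers to handle the nil case.
func search(strings []string) *string {
    for i := range strings {
        if strings[i][0] == 'H' {
            return &strings[i]
        }
    }
    return nil
}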

Type Inference

The Problem

Sometimes it gets a little old specifying the type of every single value in our program. There are some cases where the type of a value should be obvious, such as

int x = 5
y = x*2

Clearly y should also be of type int. There are more complex situations where this is true as well, for example deducing the return type of a function based on the types of its arguments (or vice versa).

A Good Solution: General Type Inference

Because Rust and Haskell are both based on the Hindley-Milner type system, they are both very good at type inference, and you can do cool things like this:

(Haskell)

map :: (a -> b) -> [a] -> [b]
let doubleNums nums = map (*2) nums
doubleNums :: Num t => [t] -> [t]

Because the function (*2) takes a Num and returns a Num of the same type, Haskell can infer that a and b are the same numeric type, and it can therefore infer that doubleNums takes and returns a list of that type. This is much more powerful than the simple type inference supported by languages like Go and C++. It allows us to make a minimal number of explicit type declarations, and the compiler can correctly fill in the rest, even in very complex program structures.

Go's Solution: :=

Go supports the := assignment operator, which works like this:

(Go)

foo := bar()

All this does is look at the return type of bar(), and set the type of foo to that. It's pretty much the same as this:

(C++)

auto foo = bar();

That's not really very interesting. All it does is shave off a few seconds of effort manually looking up the return type of bar(), and a few characters typing out the type in foo's declaration.
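To make the limitation concrete, here is a sketch: := only infers the type of a local variable from its initializer, and inference never crosses a function boundary the way Hindley-Milner inference does. The signature still has to be spelled out in full:

(Go)

// The parameter and return types must be written explicitly; Go will not infer
// them from the body the way Haskell inferred the type of doubleNums above.
func double(n int) int {
    result := n * 2 // := infers that result is an int
    return result
}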

Immutability

The Problem

Immutability is the practice of setting values once, at the moment of their creation, and never changing them. The advantages of this are pretty clear; if values are constant, every bug caused by mutating a data structure in one place while using it in another is eliminated.

Immutability is also helpful for certain kinds of optimization.

A Good Solution: Immutability By Default

Programmers should strive to use immutable data structures whenever possible. Immutability makes it much easier to reason about side effects and program safety, and strips away entire classes of bugs.

In Haskell, all values are immutable. If you want to modify a data structure, you have to create an entirely new data structure with the correct changes. This is still pretty fast because Haskell uses lazy evaluation and persistent data structures. Rust is a systems programming language, which means that it can't use lazy evaluation, which means that immutability isn't always as practical as it is in Haskell. Therefore, Rust makes variables immutable by default, but allows for mutability should it be needed. This is great, because it forces programmers to ask if they really need that variable to be mutable, which encourages good programming practice and allows for increased optimization by the compiler.

Go's Solution: N/A

Go does not support immutability declarations. The const keyword exists, but it only covers compile-time constants of basic types (numbers, strings, and booleans); there is no way to declare a variable, struct field, or slice immutable.
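To be precise about the gap, here is a short sketch (the Point type is invented for the example): const handles compile-time constants of basic types, but nothing stops later mutation of a struct or slice:

(Go)

const maxRetries = 3 // allowed: a compile-time constant of a basic type

type Point struct{ X, Y int }

func mutateFreely() {
    p := Point{1, 2}
    p.X = 99 // always allowed; there is no way to declare p immutable
    xs := []int{1, 2, 3}
    xs[0] = -1 // always allowed; Go has no read-only slices or fields
}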

Control Flow Structures

The Problem

Control flow structures are part of what separate programming languages from assembly. They let us use abstract patterns to direct a program's execution in a coherent way. Obviously all programming languages support some kind of control flow structures, or else we would not use them. However, there are a few nice control flow structures that I feel Go is missing.

A Good Solution: Pattern Matching and Compound Expressions

Pattern matching is a really cool way of working with data structures and values. It's kind of like a case/switch expression on steroids. You can pattern match values like this:

(Rust)

match x {
  0 | 1 => action_1(),
  2 .. 9 => action_2(),
  _ => action_3()
};

And you can deconstruct data structures like this:

(Rust)

let deg_kelvin = match temperature {
  Celsius(t) => t + 273.15,
  Fahrenheit(t) => (t - 32)/1.8 + 273.15
};

The previous example showed something sometimes called "compound expressions". In languages like C and Go, if statements and case/switch statements just direct the flow of the program; they don't evaluate to a value. In languages like Rust and Haskell, if statements and pattern matches actually can evaluate to a value, and this value can be assigned to things. In other words, things like if/else statements can actually "return" a value. Here's an example with an if statement:

(Haskell)

x = if (y == "foo") then 1 else 2

Go's Solution: C-style Valueless Statements

I don't want to short-change Go here; it does have some nice control flow primitives for certain things, like select for parallelism. However, it doesn't have compound expressions or pattern matching, which I'm a big fan of. In Go, assignment can only come from a plain expression, like x := 5 or x := foo(); an if or switch statement never produces a value.
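For comparison, the closest Go gets to the Haskell one-liner above is a statement plus a mutable variable (a sketch, assuming y is a string already in scope, as in the Haskell snippet):

(Go)

// if is a statement, not an expression, so we declare a mutable variable with a
// default and overwrite it in the branch instead of binding the result directly.
x := 2
if y == "foo" {
    x = 1
}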

Embedded Programming

Writing programs for embedded systems is very different from writing programs for computers with full-featured operating systems. Some languages are much better suited to deal with the unique requirements of embedded programming.

I am confused by the seemingly non-trivial number of people advocating Go for programming robots. Go is not suitable for programming embedded systems, for a number of reasons. This section is not a criticism of Go. Go was not designed for embedded systems. This section is a criticism of people recommending Go for embedded programming.

Sub-Problem #1: The Heap and Dynamic Allocation

The heap is a region of memory that can be used to store an arbitrary number of objects created at runtime. We call usage of the heap "Dynamic Allocation".

It is generally unwise to use heap memory in embedded systems. The most practical reason for this is that the heap generally has a fair amount of memory overhead, and requires somewhat complex data structures to manage, neither of which are good when you're programming on an 8MHz microcontroller with 2KB of RAM.

It's also unwise to use the heap in real-time systems (systems that might fail if an operation takes too long), because the amount of time it takes to allocate and free memory on the heap is highly non-deterministic. For example, if your microcontroller is controlling a rocket engine, it would suck if you tried to allocate some space on the heap and it happened to take a few hundred milliseconds longer than usual, leading to incorrect valve timing and a massive explosion.

There are other aspects of dynamic allocation that do not lend themselves to effective embedded programming. For example, many languages that use the heap rely on garbage collection, which usually involves pausing the program for a bit and looking through the heap for things that aren't being used anymore, and then deleting them. This tends to be even less deterministic than simple heap allocation.

A Good Solution: Making Dynamic Allocation Optional

Rust has a number of standard library features that rely on the heap, like boxes. However, Rust has compiler directives to completely disable any heap-using language features, and statically verify that none of these features are being used. It is entirely practical to write a Rust program with no heap usage.

Go's Solution: N/A

Go relies heavily on the usage of dynamic allocation. There is no practical way to constrain Go code to use only stack memory. This is not a problem with Go. This is perfectly fine within Go's intended area of usage.

Go is also not a real-time language. It is generally impossible to make strict guarantees about the execution time of any reasonably complex Go program. This may be a bit confusing, so let me be clear; Go is relatively fast, but it is not real-time. Those two are very different. Fast is nice for embedded programming, but the really important thing is that we need to be able to make guarantees about the maximum possible time something might take, which is not easily predicted when using Go. This is mostly due to Go's heavy usage of heap memory and garbage collection.

The same problems apply to languages like Haskell. Haskell is not suited for real-time or embedded programming because it has similarly heavy heap usage. However, I've never seen anyone advocating Haskell for robotics, so I've never needed to point this out.

Sub-Problem #2: Unsafe Low-Level Code

When it comes to writing embedded programs, it is pretty much inevitable that you will eventually have to write some unsafe code (that does unsafe casts, or pointer arithmetic). In C or C++, doing unsafe things is really easy. Let's say I need to turn on an LED by writing 0xFF to the memory address 0x1234. I can just do this:

(C/C++)

*(uint8_t*)0x1234 = 0xFF;

This is exceptionally dangerous, and only makes any sense whatsoever in very low-level systems programming. This is why neither Go nor Haskell has any easy way to do this; they are not systems languages.

A Good Solution: Unsafe Code Isolation

Rust, with its focus on both safety and systems programming, has a good way of doing this: unsafe code blocks. Unsafe code blocks are a way of explicitly isolating dangerous code from safe code. Here's how we write 0xFF to the memory address 0x1234 in Rust:

(Rust)

unsafe{
  *(0x1234 as *mut u8) = 0xFF;
}

If we tried to do that outside of an unsafe code block, the compiler would complain loudly. This allows us to do the unfortunate but necessary dangerous operations inherent to embedded programming, while maintaining code safety as much as possible.

Go's Solution: N/A

Go is not intended to do this sort of thing. The unsafe package does exist, but it is aimed at low-level pointer tricks within ordinary Go programs rather than at the kind of memory-mapped I/O shown above.

TL;DR

Now you may say "But why is Go not good? This is just a list of complaints; you can complain about any language!". This is true; no language is perfect. However, I hope my complaints reveal a little bit about how
· Go doesn't really do anything new.
· Go isn't well-designed from the ground up.
· Go is a regression from other modern programming languages.