Fibbing with Scala

31 Dec 2008

I found it quite trivial to port my Erlang Fibonacci examples to Scala. The naive and tail-recursion-friendly versions look pretty much the same: [source:scala]

def fib(n: Int): Int = {
  n match {
    case 0 => 0
    case 1 => 1
    case _ => fib(n-1) + fib(n-2)
  }
}

def fib2iter(iter: Int, result: Int, next: Int): Int = {
  if (iter == 0) {
    result
  } else {
    fib2iter(iter-1, next, result+next)
  }
}

def fib2(n: Int): Int = {
  fib2iter(n, 0, 1)
}

[/source]
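For small inputs the two versions agree, as a quick REPL check shows (illustrative output; F(10) = 55, and the res numbers will vary on your machine):

[source:scala]

scala> fib(10)
res0: Int = 55

scala> fib2(10)
res1: Int = 55

[/source]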

Scala of course runs on the JVM, which does not support tail-call optimization. However, the Scala compiler recognizes self-recursive tail calls like the one in fib2iter and compiles them into a loop, so fib2 keeps its iterative character and can scale for larger values of N.
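As an aside, later Scala releases (2.8 and up, which shipped after this post was written) let you make that guarantee explicit with the @tailrec annotation: the compiler refuses to compile the method if it cannot eliminate the tail call. A minimal sketch, assuming a 2.8+ compiler:

[source:scala]

import scala.annotation.tailrec

// Compilation fails if this recursive call is not in tail position.
@tailrec
def fib2iterChecked(iter: Int, result: Int, next: Int): Int =
  if (iter == 0) result
  else fib2iterChecked(iter - 1, next, result + next)

[/source]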

In playing with fib2 I found, interestingly, that because Scala's Int is a 32-bit signed integer (the same as Java's int), we overflow at N=47: [source:scala]

scala> fib2(46)
res16: Int = 1836311903

scala> fib2(47)
res17: Int = -1323752223

[/source]
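The crossover point is exactly where you'd expect for a 32-bit signed Int: F(46) = 1836311903 still fits under Int.MaxValue (2147483647), but F(47) = F(46) + F(45) = 1836311903 + 1134903170 = 2971215073 does not, so the addition wraps around to the negative value above. A quick illustration (res numbers will vary):

[source:scala]

scala> Int.MaxValue
res2: Int = 2147483647

scala> 1836311903 + 1134903170  // F(46) + F(45) = F(47), wraps past Int.MaxValue
res3: Int = -1323752223

[/source]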

The solution there is to switch to the Scala class BigInt (which is just a friendly wrapper around java.math.BigInteger): [source:scala]

def fib(n: Int): BigInt = {
  n match {
    case 0 => BigInt(0)
    case 1 => BigInt(1)
    case _ => fib(n-1) + fib(n-2)
  }
}

def fib2iter(iter: Int, result: BigInt, next: BigInt): BigInt = {
  if (iter == 0) {
    result
  } else {
    fib2iter(iter-1, next, result+next)
  }
}

def fib2(n: Int): BigInt = {
  fib2iter(n, BigInt(0), BigInt(1))
}

[/source]
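With BigInt in place the wraparound disappears; since F(47) = 2971215073, a quick check should look something like this (illustrative output, res number will vary):

[source:scala]

scala> fib2(47)
res4: BigInt = 2971215073

[/source]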

I hunted around for a built-in way to time something in Scala but didn't really find anything other than the Benchmark trait, which offered the functionality I wanted but packaged as a trait rather than a standalone function. I ultimately just cobbled my own together:

[source:scala]

def timeNanos(n: Int, f: => Unit): Long = {
  var total: Long = 0
  for (val i <- List.range(0, n)) yield {
    val startTime = System.nanoTime()
    f                                  // evaluate the by-name block being timed
    val stopTime = System.nanoTime()
    System.gc()
    total += stopTime - startTime
  }
  total / n                            // average nanoseconds per iteration
}

[/source]

This function runs a closure f for n iterations, timing each iteration with System.nanoTime(), then averages the results. Note that f is a by-name parameter, so the expression passed in is re-evaluated on every iteration rather than once at the call site. Not pretty and surely fraught with microbenchmarking bias, but it'll serve for this blog. To actually run all of this:

[source:scala]

scala> :load fib.scala
Loading fib.scala...
fib: (Int)BigInt
fib2iter: (Int,BigInt,BigInt)BigInt
fib2: (Int)BigInt

scala> :load timing.scala
Loading timing.scala...
timeNanos: (Int,=> Unit)Long

scala> timeNanos(10, fib(8))
res0: Long = 255700

[/source]

After playing around for a while and warming up the JVM I started to see consistent times and compared them here against my Erlang results from yesterday:

So, pretty similar results for both versions, as we'd expect. Scala seems to be a little slower at smaller N but has a flatter curve as N gets bigger.