Nottingham FP Lab Blog

Welcome to the blog of the Functional Programming Lab at the University of Nottingham!

To post something, read the instructions at http://bitbucket.org/fplab/hblog.

Tags

FRP Game programming General Kinect Lunches Puzzles Type theory meetings Yampa

Functional Reactive Games and Kinect

by Ivan Perez on November 28, 2014

A new version of the Haskanoid game has been published. This version features Kinect support. The image-processing module is about 80 lines of code, and connecting it to the rest of the game only required modifying one module (the main input subsystem).

This new version also fixes two issues, opened by online collaborators, related to asset locations after installation and multiplatform hardware support (Wiimote and Kinect support can now be disabled via compilation flags when the hardware is not available).

Haskanoid is a Haskell game that features some of the complex elements found in arcade games: SDL graphics and sound, Wiimote and Kinect controls, and differentiated subsystems for rendering and sound, game input, physics/collisions, game logic, etc. The game has several levels, and more can be added easily.

The code has substantial Haddock documentation. You are encouraged to go through the program, point out any issues or questions you may have, suggest improvements, and send pull requests.

The game is written in Haskell, using the new Yampa 0.9.6, hcwiid for wiimote access, freenect for Kinect access and SDL 1.2 for multimedia.

A version of this Haskell game (on steroids) is being developed using SDL2. You can read the announcement and watch a video of the Haskell code running on an Android tablet here. Keera Studios has more Haskell games running on Android, which are being beta-tested via Google Play.


Declarative Game Programming (PPDP 14) - additional material

by Ivan Perez on September 23, 2014

Earlier this month, Henrik Nilsson presented a tutorial on Declarative Game Programming at PPDP 14.

The goal of the tutorial was to show how real game programming is possible in a purely functional, declarative way. The approach used in Functional Reactive Programming, defining games as networks of interconnected signals and signal transformers, results in clear, reusable, modular code.
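To make the "networks of signals and signal transformers" idea concrete, here is a toy stream-based sketch of a signal function in plain Haskell. This is purely illustrative: the names `integral`, `embed` and the `(>>>>)` combinator merely echo Yampa's `integral`, `embed` and `(>>>)`; Yampa's real signal functions are arrowized and implemented quite differently.

```haskell
-- A toy stream-transformer formulation of the signal-function idea.
-- Illustrative sketch only; NOT Yampa's actual implementation.
newtype SF a b = SF { step :: Double -> a -> (b, SF a b) }

-- Integrate the input signal over time (simple Euler steps).
integral :: Double -> SF Double Double
integral acc = SF $ \dt x ->
  let acc' = acc + x * dt in (acc', integral acc')

-- Compose signal functions in series, like Yampa's (>>>).
(>>>>) :: SF a b -> SF b c -> SF a c
SF f >>>> SF g = SF $ \dt a ->
  let (b, sf1) = f dt a
      (c, sf2) = g dt b
  in (c, sf1 >>>> sf2)

-- Run a signal function over a list of (time-delta, sample) pairs.
embed :: SF a b -> [(Double, a)] -> [b]
embed _      []             = []
embed (SF f) ((dt, a) : xs) = let (b, sf') = f dt a in b : embed sf' xs

main :: IO ()
main = print (embed (integral 0) [(0.5, 2), (0.5, 2), (0.5, 2)])
-- prints [1.0,2.0,3.0]
```

A whole game in this style is a network of such transformers: one for input, one for physics, one for game logic, composed with combinators like the one above.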

To illustrate this idea, we created a Haskell game that features some of the complex elements found in arcade games: SDL graphics and sound, Wiimote controls, and differentiated subsystems for rendering and sound, game input, physics/collisions, game logic, etc. The game has several levels, and more can be added easily.

All the material from this talk is now available, including the slides and the complete game code.

The code has substantial Haddock documentation. You are encouraged to go through the program, point out any issues or questions you may have, suggest improvements, and send pull requests.

The game is written in Haskell, using the new Yampa 0.9.6, hcwiid for wiimote access, and SDL 1.2 for multimedia.

A version of this Haskell game (on steroids) is being developed using SDL2. You can read the announcement and watch a video of the Haskell code running on an Android tablet here.


Yampa 0.9.6 released

by Ivan Perez on August 29, 2014

A new version of the Arrowized Functional Reactive Programming (FRP) library Yampa has been released.

This new version includes a lot of haddock documentation, finally addressing the old issue of having to browse a large collection of papers and the code to find information about the API.

There are no changes to the API itself, so this version remains backwards compatible. Future releases will seek to improve this documentation and introduce other optimisations as mandated by users' needs.

Additionally, Henrik Nilsson will give a tutorial on Declarative Game Programming at PPDP 2014, which will take place in Canterbury (UK) on September 8-10 (http://users-cs.au.dk/danvy/ppdp14/).



Categorical Semantics of Dependent Type Theory

by nicolai on February 7, 2013

In the Type Theory meeting last week, I gave an overview of the categorical semantics of dependent type theory. The semantics of the simply typed lambda calculus are well known and generally accepted, but this is much less true in the dependently typed setting. It is also not so easy to state the conditions for a category \(\mathbb C\) to be suitable for the construction of a model. Often, people just say that \(\mathbb C\) should be locally cartesian closed. This makes a lot of sense, but it is neither really sufficient nor necessary. For the moment, however, it is a good description of \(\mathbb C\).

We interpret every (valid) context as an object in \(\mathbb C\). For the simply typed lambda calculus, it is not too important to distinguish between contexts and types. This is more subtle for dependent types, as not every type is closed. We have typing judgments \[ \Gamma \vdash A \textit{ : type}, \] and from such a judgment we get a new valid context, often written \(\Gamma.A\), the context extension. We interpret this judgment as a morphism between these contexts: \[[[ \Gamma \vdash A \textit{ : type} ]] \; : \; [[ \Gamma.A ]] \to [[ \Gamma ]]. \] One should think of this map as a projection that "projects \(A\) away". Being lazy, one also writes \([[A]]\) instead of the rather long expression above. In the simply typed setting, one usually models types as objects; equivalently, one could model a type as the unique morphism from the object to the terminal object.

If we have the type judgment above, we can also have a term of this type, \[ \Gamma \vdash t \, : \, A .\] We model this judgment as a section of the type judgment, i.e. a morphism from \([[\Gamma ]]\) to \([[\Gamma.A]]\) such that \[ [[ \Gamma \vdash A \textit{ : type} ]] \; \circ \; [[ \Gamma \vdash t \, : \, A]] \; = \; \operatorname{id}_{[[\Gamma]]}, \] or just \([[A]] \circ [[t]] = \operatorname{id}\) in the lazy version.

The nice thing is that we can do many things in the same way as in the simply typed case; we just need to restrict ourselves to the slice category over \([[\Gamma]]\) if we work in context \(\Gamma\). For example, for the simply typed lambda calculus, we use exponentials to model function types. Note that the functor "\(A\to\)", or \(\_^A\), is right adjoint to the product functor \(\_ \times A\). The product with \([[A]]\) in the slice over \([[\Gamma]]\) is nothing else but the pullback along \([[A]]\), which we write \(A^*\), and we get dependent function types as the right adjoints of these pullbacks. Similarly, we get dependent sums as left adjoints of these pullbacks. In summary, we have \[\Sigma_A \dashv A^* \dashv \Pi_A.\] One can convince oneself that this makes sense by drawing a couple of diagrams.
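Spelled out, these adjunctions assert natural bijections of hom-sets between the two slice categories (a sketch in the notation above, with \(\Sigma_A, \Pi_A : \mathbb{C}/[[\Gamma.A]] \to \mathbb{C}/[[\Gamma]]\) and \(A^*\) going the other way):

```latex
% Sigma_A -| A^* : for X over [[Gamma.A]] and Y over [[Gamma]]
\operatorname{Hom}_{\mathbb{C}/[[\Gamma]]}(\Sigma_A X,\; Y)
  \;\cong\;
\operatorname{Hom}_{\mathbb{C}/[[\Gamma.A]]}(X,\; A^* Y)

% A^* -| Pi_A : for Y over [[Gamma]] and Z over [[Gamma.A]]
\operatorname{Hom}_{\mathbb{C}/[[\Gamma.A]]}(A^* Y,\; Z)
  \;\cong\;
\operatorname{Hom}_{\mathbb{C}/[[\Gamma]]}(Y,\; \Pi_A Z)
```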

I think these semantics were first described by Seely in "Locally Cartesian Closed Categories and Type Theory", and they have often been discussed since; in particular, there is a coherence problem (things that hold strictly in type theory can, by default, only be described up to isomorphism in category theory). Arguably the cleanest solution to this problem is a universe construction by Voevodsky. The plan is that more detailed semantics, together with this construction, the presheaf category of simplicial sets, and univalence, will be the topics of the Type & Homotopy Theory meetings, as well as the Sheaf Theory seminars, in the next two weeks.


ε0 happy returns

by Venanzio on December 15, 2012

Today is the 100th anniversary of the birth of Reuben Goodstein. It's a very young age for someone immortalized by his voyages into the transfinite realm. A few months ago I talked about his theorem during an FP lunch. After some wait, here is a blog post about it.

Cantor Normal Forms

Every natural number \(x\) can be put in Cantor normal form with respect to any base \(b\). First we write \(x\) in base \(b\), as a sum of powers of \(b\) multiplied by coefficients smaller than \(b\); for example, if \(x=68630377369258\) and \(b=3\) we have \[ 68630377369258 = 3^{29} + 3^7\cdot 2 + 1 \] We recursively write the exponents in the same base: \[ 29 = 3^3+2 = 3^{3^1}+2, \quad 7 = 3\cdot 2 +1 = 3^1\cdot 2+1 \] Thus the Cantor normal form of 68630377369258 in base 3 is: \[ 68630377369258 = 3^{3^{3^1}+2} + 3^{3^1\cdot 2+1}\cdot 2 + 1 \] So a Cantor normal form in base \(b\) expresses a number \(x\) as \(x = b^{c_0} \cdot a_0 + b^{c_1} \cdot a_1 + \cdots + b^{c_n} \cdot a_n\), where the coefficients \(a_0\), ..., \(a_n\) are smaller than \(b\) and the exponents \(c_0\), ..., \(c_n\) are themselves Cantor normal forms.

Abstractly, generalizing over the base, it's a list of pairs of coefficients and exponents, which are themselves Cantor normal forms. Here it is expressed as a Haskell data type with functions converting between it and naturals with respect to a given base.

-- A CNF is a list of (coefficient, exponent) pairs, with the
-- exponents themselves in CNF, in decreasing order of exponent.
data Cantor = Cantor [(Integer, Cantor)]
  deriving Show

-- (cantNum c b) computes the number with CNF c in base b
cantNum :: Cantor -> Integer -> Integer
cantNum (Cantor l) = cantNumL l

cantNumL :: [(Integer,Cantor)] -> Integer -> Integer
cantNumL [] _ = 0
cantNumL ((i,c):cs) base = i*base^(cantNum c base) + cantNumL cs base

-- (numCant b x) computes the CNF of x in base b
numCant :: Integer -> Integer -> Cantor
numCant base = Cantor . map (\(c,n) -> (c,numCant base n)) . (inBase base)

-- (iLog b n) is the integer logarithm: the largest i with b^i <= n
iLog :: Integer -> Integer -> Integer
iLog base n = iLogAux 0 1
  where iLogAux i k = if (k*base)>n then i else iLogAux (i+1) (k*base)

-- (inBase b n) writes n in base b as a list of
-- (coefficient, exponent) pairs with decreasing exponents
inBase :: Integer -> Integer -> [(Integer,Integer)]
inBase base 0 = []
inBase base n = (n `div` k,i) : inBase base (n `rem` k)
  where i = iLog base n
        k = base^i

Now take a Cantor normal form (CNF), change the base, and recompute the number. For example, earlier we had the CNF of \(68630377369258\) in base 3. If we change the base to 4 we obtain \[ \begin{array}{l} 3^{3^{3^1}+2} + 3^{3^1\cdot 2+1}\cdot 2 + 1 \leadsto 4^{4^{4^1}+2} + 4^{4^1\cdot 2+1}\cdot 2 + 1\\ = 21452492687908155359318439997129353803966985312947829\\ \quad 40435769830995482244811767516288299887706704548430405\\ \quad 09730983776813660062124991145119142938384097869825. \end{array} \] The number has become much bigger, so let's make it smaller by subtracting one! \[ \begin{array}{l} 21452492687908155359318439997129353803966985312947829\\ 40435769830995482244811767516288299887706704548430405\\ 09730983776813660062124991145119142938384097869825 - 1 \\ = 21452492687908155359318439997129353803966985312947829 \\ \quad 40435769830995482244811767516288299887706704548430405 \\ \quad 09730983776813660062124991145119142938384097869824. \end{array} \] This is one step of the Goodstein procedure: find the Cantor normal form of a number \(x\) with respect to a base \(b_1\), change the base to \(b_2\), recompute the number, and subtract one. The Goodstein sequence associated with a number is obtained by iterating this step, increasing the base by one each time.

-- (goodstep b1 b2 n) first finds the CNF of n in base b1
-- then recomputes it in base b2 and subtracts 1
goodstep :: Integer -> Integer -> Integer -> Integer
goodstep base1 base2 n = cantNum (numCant base1 n) base2 - 1

-- Given a list of bases and a number n
-- we iterate the Goodstein step on n using all the bases
goodsteinGen :: [Integer] -> Integer -> [Integer]
goodsteinGen _ 0 = []
goodsteinGen (b1:b2:bs) n = n : goodsteinGen (b2:bs) (goodstep b1 b2 n)

goodstein :: Integer -> [Integer]
goodstein = goodsteinGen [2..]

Let's compute a few:

*Main> goodstein 1
[1]
*Main> goodstein 2
[2,2,1]
*Main> goodstein 3
[3,3,3,2,1]

Now do what I did: run it on 4 on your university's server and enjoy your colleagues' screams of frustration when everything hangs.

Although computing the Goodstein sequence for 4 is not feasible in practice, one can prove that it terminates.
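Thanks to laziness, we can still peek at a finite prefix of the sequence for 4 without waiting for the heat death of the universe. The sketch below repeats the definitions from above (slightly condensed) so that it can be run standalone:

```haskell
-- Standalone copy of the earlier definitions, used to look at the
-- start of the (astronomically long) Goodstein sequence for 4.
data Cantor = Cantor [(Integer, Cantor)]

-- (cantNum c b) computes the number with CNF c in base b
cantNum :: Cantor -> Integer -> Integer
cantNum (Cantor l) base = sum [ i * base ^ cantNum c base | (i, c) <- l ]

-- (numCant b x) computes the CNF of x in base b
numCant :: Integer -> Integer -> Cantor
numCant base = Cantor . map (\(c, n) -> (c, numCant base n)) . inBase base

-- (iLog b n) is the largest i with b^i <= n
iLog :: Integer -> Integer -> Integer
iLog base n = go 0 1
  where go i k = if k * base > n then i else go (i + 1) (k * base)

-- (inBase b n) writes n in base b as (coefficient, exponent) pairs
inBase :: Integer -> Integer -> [(Integer, Integer)]
inBase _    0 = []
inBase base n = (n `div` k, i) : inBase base (n `rem` k)
  where i = iLog base n
        k = base ^ i

-- one Goodstein step: CNF in base b1, recompute in base b2, minus 1
goodstep :: Integer -> Integer -> Integer -> Integer
goodstep b1 b2 n = cantNum (numCant b1 n) b2 - 1

goodstein :: Integer -> [Integer]
goodstein = go [2 ..]
  where go _ 0            = []
        go (b1:b2:bs) n   = n : go (b2:bs) (goodstep b1 b2 n)

main :: IO ()
main = print (take 5 (goodstein 4))   -- [4,26,41,60,83]
```

The prefix already hints at the growth: the bases keep climbing, while the "minus one" only nibbles away at the number.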

Goodstein's Theorem

The Goodstein procedure terminates on every starting number.

Although the theorem has a simple formulation and a short, elegant proof, it cannot be proved in Peano Arithmetic: it essentially needs transfinite induction. It is probably the simplest natural result that is independent of PA.

The proof consists in associating an ordinal to every CNF, proving that every step in the process reduces this ordinal and therefore concluding that the process terminates by well-foundedness of the ordinal numbers.

Given a CNF, just compute it using the ordinal \(\omega\) (the ordinal associated to the natural numbers) as a base. The example that we've seen earlier gives: \[ \omega^{\omega^{\omega^1}+2} + \omega^{\omega^1\cdot 2+1}\cdot 2 + 1 \] (Addition and multiplication of transfinite ordinals are not commutative; to get the correct result the coefficients must be multiplied on the right and the terms must be in decreasing order.)
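We can render this ordinal assignment with a small printer over the `Cantor` type. The sketch below is standalone (it repeats `numCant` and friends from above) and writes `w` for \(\omega\); `showOrd` is my own illustrative helper, not part of the code above:

```haskell
import Data.List (intercalate)

-- Standalone sketch: print the CNF of a number with the base
-- replaced by the ordinal omega (written "w" here).
data Cantor = Cantor [(Integer, Cantor)]

-- (numCant b x) computes the CNF of x in base b
numCant :: Integer -> Integer -> Cantor
numCant base = Cantor . map (\(c, n) -> (c, numCant base n)) . inBase base

-- (iLog b n) is the largest i with b^i <= n
iLog :: Integer -> Integer -> Integer
iLog base n = go 0 1
  where go i k = if k * base > n then i else go (i + 1) (k * base)

-- (inBase b n) writes n in base b as (coefficient, exponent) pairs
inBase :: Integer -> Integer -> [(Integer, Integer)]
inBase _    0 = []
inBase base n = (n `div` k, i) : inBase base (n `rem` k)
  where i = iLog base n
        k = base ^ i

-- Render a CNF as an ordinal expression below epsilon_0,
-- with coefficients multiplied on the right.
showOrd :: Cantor -> String
showOrd (Cantor []) = "0"
showOrd (Cantor ts) = intercalate " + " (map term ts)
  where
    term (a, Cantor []) = show a                  -- w^0 * a  =  a
    term (1, c)         = pow c
    term (a, c)         = pow c ++ "*" ++ show a  -- coefficient on the right
    pow (Cantor [(1, Cantor [])]) = "w"           -- w^1  =  w
    pow c                         = "w^(" ++ showOrd c ++ ")"

main :: IO ()
main = putStrLn (showOrd (numCant 3 68630377369258))
-- prints w^(w^(w) + 2) + w^(w*2 + 1)*2 + 1
```

This agrees with the ordinal displayed above, \(\omega^{\omega^{\omega}+2} + \omega^{\omega\cdot 2+1}\cdot 2 + 1\), up to some harmless extra parentheses.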

Now it is very easy to see that every step reduces the ordinal. If the CNF contains a non-zero term with the base raised to the power zero (that is, if the number is not divisible by the base), then the associated ordinal is a successor and the step just gives its predecessor. Otherwise there is a bigger decrease in the ordinal (depending on the base used).

In any case, we get a decreasing sequence of ordinals that will eventually reach zero. This proves the theorem.

Since every tower of exponentiations of the base is a valid CNF, we get that every ordinal smaller than \(\epsilon_0\) can occur: \[ \epsilon_0 = \sup \{\omega, \omega^\omega, \omega^{\omega^\omega}, \ldots\}. \] So we are doing \(\epsilon_0\)-induction, which goes beyond the power of Peano Arithmetic.


A Quotient in HoTT

by Nicolai Kraus on December 12, 2012

Last Friday (07/12/12), I talked about the difficulties of quotients in Homotopy Type Theory.

Traditionally, the quotient \(A/\sim\) of a set \(A\) by some equivalence relation \(\sim : A \to A \to \textbf{Prop}\) is a set \(Q\) (or, by abuse of notation, just \(A/\sim\); think of the set of equivalence classes), together with a function \([\_] : A \to Q\) (mapping every element to its equivalence class) and an eliminator. The latter says, more or less, that whenever a function \(f : A \to C\) does not distinguish between elements that are \(\sim\)-equivalent, it can be lifted to a function \(\overline f : Q \to C\). For the details, I would recommend Definable Quotients in Type Theory by Nuo, Thomas and Thorsten.
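As a rough sketch in standard notation (my paraphrase, not the precise formulation of the cited paper), the set-level eliminator can be written as the following rule:

```latex
% Lifting property of the quotient (set-level, i.e. assuming UIP):
% a function f that respects ~ factors uniquely through [_]
\frac{\; f : A \to C
      \qquad
      p : \prod_{a,\,b : A} (a \sim b) \to f\,a = f\,b \;}
     {\; \overline{f} : Q \to C
      \qquad
      \overline{f} \circ [\_] \equiv f \;}
```

Without UIP, the proof term \(p\) (and its coherences) starts to matter, which is exactly the difficulty discussed below.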

This is a very natural concept and, in some cases, it is necessary. For example, the real numbers cannot be defined without quotients (this statement refers only to Type Theory, but even in classical mathematics, notions such as the Cauchy reals make use of quotients). However, the concept becomes much more involved as soon as the principle UIP (uniqueness of identity proofs) is dropped. It would make sense to generalise the notion of an equivalence relation, but even if we don't, it is not clear at all what it means for a function to "not distinguish between elements that are equivalent". Without UIP, the actual proofs and their behaviours matter, while we did not have to care about that issue before.

I talked about the example of possibly the simplest equivalence relation, the one that is constantly \(\textbf{True}\). In that case, the quotient should be the h-propositional reflection (also known as "squash type" or "bracket type"), but it is already very hard to formalise the conditions under which a function \(f : A \to C\) can be lifted to \(Q \to C\). Informally, for any finite number of points in \(A\), we can, by induction on the number of points, define the corresponding simplex in \(Q\), and the condition on \(f\) is that these simplices can be built out of coherence proofs. This leads to one of the standard constructions in \(\omega\)-category theory, and possibly to concepts that are well known in homology. Of course, the first goal is the formalisation of the informally described idea.
