Tuesday, July 17, 2012

Another Logic Programming Reading List

Inspired by David Nolen's recent post, A Logic Programming Reading List, I compiled my own list. Of course my list is biased towards computational linguistics, but I included a book on databases. The titles are freely available on the Internet!

A very accessible introduction to Prolog, written for students of computational linguistics but usable by anyone interested in logic programming. The free version contains enough material to get started, so you don't have to buy the paper book.

I loved this book because it covers Herbrand models, resolution, soundness and completeness, SLD-trees and cut, yet it is very accessible. Don't be afraid of the big words: the book is full of examples, and its main topic is reasoning about structured knowledge.

If you want to dive deep into Prolog, you can't avoid the Warren Abstract Machine, as it is the target of Prolog compilers. This free ebook will help you understand efficiency issues in logic programming.

A Prolog-based introduction to the basic algorithms and methods of computational linguistics. It is "old school", so it deals with rule-based techniques. At the very least, you should have a look at the sections on finite state automata, finite state parsers and regular languages.

Learn about semantics, lambda calculus and discourse representation theory.

Datalog is back! It is deeply rooted in logic programming, and this book puts it into context.

Fernando C.N. Pereira - Stuart M. Shieber: Prolog and Natural Language Analysis
This classic presupposes some knowledge of logic, formal language theory, and linguistics. If you are a linguist, this means Partee et al.

Saturday, February 25, 2012

lx in core.logic #3: Finite State Transducers

This is the third post in the series on using core.logic to implement basic constructs in computational linguistics. If you haven't already, you might want to have a look at the first two posts before you start.

Today, we're gonna look at finite state transducers, which are commonly used to model and implement translation. While sounding fancy and powerful, they are straightforward extensions of finite automata.

(ns fst
  (:refer-clojure :exclude [==])
  (:use [clojure.core.logic]))
;; A finite state transducer is essentially a translator between
;; two tapes of symbols. It is normally a translator from an input
;; tape to an output tape, but since we are using core.logic,
;; we hope to relax this restriction :).
;; The main idea is that every transition accepts two symbols
;; (one from each tape). We will implement a simple pluralizer
;; for most English words.
;; WARNING: Maths ahead, skip at your leisure
;;
;; Formally, a finite state transducer is a tuple
;; T = (Q, Sigma, Gamma, I, F, delta)
;;
;; where
;; Q is the state space | Q = {0, std, es, s, 1}
;; Sigma is the input alphabet | Sigma = {a-z} and :jump
;; Gamma is the output alphabet | Gamma = {a-z} and :jump
;; I are the starting states | {0}
;; F are the accepting states | {1}
;; delta is the transition function
;; delta(q, a, b, qto) signifies that it is possible
;; to transition from state q to state qto by consuming
;; a from the first tape and b from the second
;; We will implement the following rules:
;; -h, -s, -o |--> -(h|s|o)es
;; -y |--> -ies
;; - |--> -s
;; Our transducer looks as follows:
;;
;;              <x>:<x>
;;             +-------+
;;             |       |
;;             v       |      (any <x> not in {s,h,o,y}):<x>
;;        --> (0) -----+-------------------------------------> (std)
;;             |                                                  |
;;             |  s:s, h:h, o:o, y:i                             #:s
;;             v                                                  |
;;            (es) --#:e--> (s) --#:s--> (1) <--------------------+
;;
;; Notation
;;
;; <a>:<b> - <a> consumed from first tape and <b> from second
;; # - jump
;; (<x>) - state
(defrel start q)
(fact start 0)
(defrel accepting q)
(fact accepting 1)
;; Transition table. Note we haven't included
;; the transitions that accept many symbols, since
;; we do not want to enumerate the input alphabets.
(defrel transition* from a b to)
(facts transition* [[0 'y 'i 'es]
                    [0 's 's 'es]
                    [0 'h 'h 'es]
                    [0 'o 'o 'es]
                    ['es :jump 'e 's]
                    ['s :jump 's 1]
                    ['std :jump 's 1]])
;; Dynamic extension of the transition table
;; to our full transition relation. This includes
;; the transitions from 0 to 0 and 0 to std.
(defn transition [from a b to]
  (conde
    ((transition* from a b to))
    ((!= a :jump)
     (== a b)
     (== from 0)
     (conde ((== to 0))
            ((== to 'std)
             (!= a 's)
             (!= a 'h)
             (!= a 'o)
             (!= a 'y))))))
;; Translation *relation*
(defn translate
  ([tape1 tape2]
   (fresh [q0]
     (start q0)
     (translate q0 tape1 tape2)))
  ([q tape1 tape2]
   (matcha [tape1 tape2]
     (['() '()]
      (accepting q))
     ([[t . ape1] [c . ape2]]
      ; This seems unnecessary when translating forward, but it makes
      ; sure we get no jumps on the input tape when we "translate"
      ; backwards, i.e. look for the singular
      (!= t :jump)
      (fresh [qto]
        (transition q t c qto)
        (translate qto ape1 ape2)))
     ([_ [t . ape2]]
      (fresh [qto]
        (transition q :jump t qto)
        (translate qto tape1 ape2))))))
(run* [q] (translate '(p a s s) '(p a s s e s)))
;; => (_.0)
;;
;; Checking pluralizations
(run* [q] (translate '(p i r a t e) q))
;; => ((p i r a t e s))
;;
;; Running the pluralizer
(run* [q] (translate '(d a i s y) q))
;; => ((d a i s i e s))
(run* [q] (translate '(h e r o) q))
;; => ((h e r o e s))
(run* [q] (translate q '(v a r s)))
;; => ((v a r))
;;
;; Getting the singular
(run* [q] (translate q '(p a s s e s)))
;; => ((p a s s) (p a s s e))
;;
;; Unfortunately, the transducer doesn't really know English,
;; but at least it got the right answer
(run 10 [q] (fresh [a b] (translate a b) (== q [a b])))
;; => ([(h) (h e s)]
;; [(o) (o e s)]
;; [(s) (s e s)]
;; [(_.0 h) (_.0 h e s)]
;; [(_.0) (_.0 s)]
;; [(y) (i e s)]
;; [(_.0 o) (_.0 o e s)]
;; [(_.0 s) (_.0 s e s)]
;; [(_.0 _.1 h) (_.0 _.1 h e s)]
;; [(_.0 _.1) (_.0 _.1 s)])

Sunday, January 29, 2012

Counting words

Zipf's law is a well-known word frequency distribution: roughly, a word's frequency is inversely proportional to its rank in the frequency table. Let's assume you are learning a foreign language and your teacher gives you books to read. You have to take exams that test whether you have acquired the vocabulary of the books. You have other commitments, and you prefer reading blogs and books on computational linguistics, so you'd like to determine the most frequent words of the texts and learn them by rote memorization right before the exam. You know that the higher the frequency of a word, the higher the probability that it will be on the test. At first it seems obvious that we just have to count how many times each word occurs in a text, but it will get a bit more complicated than that.
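For intuition, here is a tiny sketch (my own, not from the original post) of what the idealized distribution looks like, assuming the classic form in which frequency is proportional to 1/rank; zipf-freqs is a hypothetical helper name.

;; A sketch only: expected frequencies of the n most frequent words
;; in a corpus of `total` tokens, under an idealized Zipf distribution.
(defn zipf-freqs [n total]
  (let [h (reduce + (map #(/ 1.0 %) (range 1 (inc n))))] ; normalizing constant
    (map #(/ total (* h %)) (range 1 (inc n)))))

;; (map long (zipf-freqs 5 100))
;; => (43 21 14 10 8)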
(ns hello-nlp.core
  (:use [clojure.string :as str :only [split-lines lower-case]] :reload)
  (:use opennlp.nlp)
  (:use opennlp.tools.filters)
  (:use (incanter core charts)))
We need a text file; I'm using Austen's Persuasion from the NLTK corpora.
(def austen
  (slurp "/path/to/your/corpora/austen-persuasion.txt"))

Warning: slurp reads the whole file into memory! Counting the words is pretty straightforward.
(defn plus-map [map key]
  (if (nil? (map key))
    (assoc map key 1)
    (assoc map key (+ (map key) 1))))

(defn plus-list-map [mymap keylist]
  (if (empty? keylist)
    mymap
    (recur (plus-map mymap (first keylist)) (rest keylist))))

(defn sortmap [mymap]
  (let [mykeys (keys mymap)
        keyorder (sort-by #(mymap %1) > mykeys)
        keymap (map (fn [key]
                      [key (mymap key)]) keyorder)]
    keymap))

(defn count-words [text]
  (let [counter {}
        one (plus-list-map counter (tokenize text))]
    one))

(defn graph-words [text]
  (let [raw (sortmap (count-words text))
        words (map first raw)
        numbers (map second raw)]
    (view (bar-chart words numbers
                     :x-label "Words"
                     :y-label "Frequency"
                     :title "Zipf"))))
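As an aside, Clojure's built-in frequencies can do the counting for us; here is a minimal sketch (my own code, not from the original gist, and count-words* is a hypothetical name) that collapses plus-map, plus-list-map and sortmap into one expression.

;; frequencies builds the word->count map, and sort-by orders the
;; [word count] pairs by descending count, like sortmap above.
(defn count-words* [tokens]
  (sort-by second > (frequencies tokens)))

;; (count-words* ["the" "cat" "sat" "on" "the" "mat"])
;; => (["the" 2] ...), with the count-1 words following in some order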
Plot the text with (graph-words austen) (or your text) and you will see something like this.

Not a very informative picture! Let's analyse our text before we modify our program. The raw text file contains a lot of "noise": it is full of punctuation marks, our program is case sensitive, and so on. Another problem lies in the nature of language itself.
(def get-sentences
  (make-sentence-detector "models/en-sent.bin"))
(def tokenize
  (make-tokenizer "models/en-token.bin"))
(def pos-tag
  (make-pos-tagger "models/en-pos-maxent.bin"))

(defn tag-sent [sent]
  (pos-tag (tokenize sent)))

(def pos-austen
  (map pos-tag (map tokenize (get-sentences austen))))

(pos-filter determiners #"^DT")
(pos-filter prepositions #"^IN")

(def preps
  (reduce + (map count (map prepositions pos-austen))))
(def dets
  (reduce + (map count (map determiners pos-austen))))
(def nps
  (reduce + (map count (map nouns pos-austen))))
(def vps
  (reduce + (map count (map verbs pos-austen))))

(def stats
  [nps vps dets preps])

(view (bar-chart ["np" "vp" "dts" "preps"] stats))

Function words like determiners and prepositions are high-frequency words. We are interested in the so-called content words like nouns and verbs.

Part-of-speech tagging consumes a lot of resources, so instead of removing function words identified by their POS tags, we are going to use a stopword list and a list of punctuation marks. I used the NLTK English stopword list and made my own list of punctuation marks.
(def stop-words
  (set (split-lines (slurp "/home/zoli/Projects/cllx/hello-nlp/corpora/stopwords/english"))))

(def punctuation-marks
  #{"+" "-" "*" "^" "." ";" "%" "\\" "," "..." "!" "?" ":" "\""})
The stop lists are stored in sets because a set can be used as a predicate, so we can filter with its complement (in Clojure, filter keeps the elements that satisfy the predicate, it doesn't remove them); see the short illustration below. It is common practice to remove hapax legomena from the distribution and to use logarithmic scales on the axes of the chart.
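A quick illustration of that idea (my own example, not from the post): a Clojure set can be called as a function of its elements, so (complement the-set) is a predicate that keeps exactly the elements that are not in the set.

;; Filtering with the complement of a stopword set drops the stopwords.
(filter (complement #{"the" "a"}) ["the" "cat" "sat" "on" "a" "mat"])
;; => ("cat" "sat" "on" "mat")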
(defn filter-hapax [lst]
  (filter #(> (second %) 1) lst))

(defn graph-text [text]
  (let [filtered-text (filter (complement punctuation-marks)
                              (filter (complement stop-words)
                                      (tokenize (lower-case text))))
        raw (filter-hapax (sortmap (plus-list-map {} filtered-text)))
        words (log10 (range 0 (count (map first raw))))
        numbers (log10 (map second raw))]
    (view (bar-chart words numbers
                     :x-label "Words"
                     :y-label "Frequency"
                     :title "Zipf"))))

Now we've got a nicer chart.

The chart shows you that you can get a decent score if you concentrate on the most frequent words.

Friday, January 27, 2012

lx in core.logic #2: Jumps, Flexible Transitions and Parsing

This is a continuation of the post Finite State Machines in Clojure core.logic.

The current plan for this series is to follow the book Algorithms for Computational Linguistics, using Clojure core.logic instead of Prolog.

Jumps, wildcard transitions and parsing are natural and useful ways to extend and leverage finite state machines for text analysis. This was an opportunity to introduce extensions of fact databases and non-deterministic matching. Here's the code:

(ns fsmparse
  (:refer-clojure :exclude [==])
  (:use [clojure.core.logic]))
;; We will encode a state machine that accepts lists containing '(w h y) as a sublist
;; Moreover, instead of a recognizer, we will implement a parser, that returns a list
;; of visited states in order
;;
;;        +----#-----+----#-----+            +--?--+
;;        v          |          |            v     |
;;   --> (x) --w--> (w) --h--> (wh) --y--> (why) --+
;;       | ^
;;       +?+
;;
;; Notation:
;; ? - any character
;; # - jump; does not consume a character from the input
;; (<x>) - state named <x>
;; --<i>--> - transition with input <i>
(defrel start q)
(fact start 'x)
;; Encoded transitions including jumps
(defrel transition* from via to)
(facts transition* [['x 'w 'w]
                    ['w 'h 'wh]
                    ['wh 'y 'why]
                    ['w :jump 'x]
                    ['wh :jump 'x]])
;; An extension of the transition* relation to implement start state and final
;; state transitions that accept any character
(defn transition [from via to]
  (conde
    ((transition* from via to))
    ((!= via :jump) (== from 'x) (== to 'x))
    ((!= via :jump) (== from 'why) (== to 'why))))
(defrel accepting q)
(fact accepting 'why)
(defn parse
  ([input parsed]
   (fresh [q0]
     (start q0)
     (parse q0 input parsed)))
  ([q input parsed]
   ; Non-relational matching, commits to the first matching clause
   (matcha [input]
     (['()]
      (accepting q)
      (== parsed (list q)))
     ([[i . nput]]
      ; Handling transitions that consume input characters
      (!= i :jump)
      (fresh [qto subparsed]
        (transition q i qto)
        (parse qto nput subparsed)
        ; conso is a built-in relation: (conso x xs l) succeeds
        ; when l is the list xs with x added to the front
        (conso q subparsed parsed)))
     ([_]
      ; Handling jump transitions
      (fresh [qto subparsed]
        (transition q :jump qto)
        (parse qto input subparsed)
        (conso q subparsed parsed))))))
(run* [q] (parse '(a w h y i n s i d e) q))
;; => ((x x w wh why why why why why why why))
(run* [q] (parse '(n o w a y) q))
;; => ()
(run 3 [q] (fresh [m] (parse q m)))
;; => ((w h y) (_.0 w h y) (_.0 _.1 w h y))

Wednesday, January 25, 2012

Finite State Machines in core.logic

This is an implementation of Finite State Machines in Clojure using core.logic. They are a good starting point for computational linguistics and illustrate several features of core.logic, such as various ways of defining new relations, pattern matching and also the invertibility of relations.

It is not an introduction to core.logic. To learn the basics, I would recommend the Logic Starter.

(ns fsm
  (:refer-clojure :exclude [==])
  (:use [clojure.core.logic]))
;; Encoding a Finite State Machine and recognizing strings in its language in Clojure core.logic
;; We will encode the following FSM:
;;
;;   (ok) --+---b---> (fail)
;;    ^     |
;;    |     |
;;    +--a--+
;; Recall that formally a finite state machine is a tuple
;; A = (Q, Sigma, delta, q0, F)
;;
;; where
;; Q is the state space | Q = {ok, fail}
;; Sigma is the input alphabet | Sigma = {a, b}
;; delta is the transition function | delta(ok, a) = ok; delta(ok, b) = fail; delta(fail, _) = fail
;; q0 is the starting state | q0 = ok
;; F are the accepting states | F = {ok}
;; To translate this into core.logic, we need to define these variables as relations.
;; Relation for starting states
;; start(q0) = succeeds
(defrel start q)
(fact start 'ok)
;; Relation for transition states
;; delta(x, character) = y => transition(x, character, y) succeeds
(defrel transition from via to)
(facts transition [['ok 'a 'ok]
                   ['ok 'b 'fail]
                   ['fail 'a 'fail]
                   ['fail 'b 'fail]])
;; Relation for accepting states
;; x in F => accepting(x) succeeds
(defrel accepting q)
(fact accepting 'ok)
;; Finally, we define a relation that succeeds whenever the input
;; is in the language defined by the FSM
(defn recognize
  ([input]
   (fresh [q0]                 ; introduce a new variable q0
     (start q0)                ; assert that it must be the starting state
     (recognize q0 input)))
  ([q input]
   (matche [input]             ; start pattern matching on the input
     (['()]
      (accepting q))           ; accept the empty string (epsilon) if we are in an accepting state
     ([[i . nput]]
      (fresh [qto]             ; introduce a new variable qto
        (transition q i qto)   ; assert it must be what we transition to from q with input symbol i
        (recognize qto nput)))))) ; recognize the remainder
;; Running the relation:
(run* [q] (recognize '(a a a)))
;; => (_.0)
;;
;; Strings in the language are recognized.
(run* [q] (recognize '(a b a)))
;; => ()
;;
;; Strings outside the language are not.
;; Here's our free lunch, relational recognition gives us generation:
(run 3 [q] (recognize q))
;; => (() (a) (a a))

Tuesday, January 24, 2012

Beginning with Clojure

I am, at heart, a linguist, not a computational linguist. I was trained in Edinburgh, which is theoretically heavy, although not in Chomskyan, traditional linguistics. What I learned of Python I essentially taught myself, and there's no limit to my ignorance when it comes to traditional programming languages. That doesn't mean I'm not willing to try something new - far from it.

So, here we go. Rather than sit and pretend I haven't been twiddling my thumbs or busy for the past few months, I'm going to come straight out and say that is exactly what has been happening. I like blogging as I go along, though.

Install Clojure from this site. And here is where I ran into my first problem. I downloaded Clojure 1.3.0, unzipped it into my 'code' folder, and then cd'd in there in the Terminal. (I run a Mac.) The site suggests running this:
java -cp clojure.jar clojure.main
Well, that didn't work. (Neither did posting code snippets on Blogger, it seems. update: nevermind.) Instead, I got this:

Exception in thread "main" java.lang.NoClassDefFoundError: clojure/main
Caused by: java.lang.ClassNotFoundException: clojure.main
        at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:247)

This is because of how Clojure is packaged: the jar inside the zip is named clojure-1.3.0.jar rather than clojure.jar, so you need to point the classpath at the actual jar name. Quick fix, and...:
java -cp clojure-1.3.0.jar clojure.main
Clojure 1.3.0
We're properly off!

I've been messing around with Clojure on and off for a while now, over here. I highly suggest the tutorial; it is great. (I also highly suggest checking out this post on why Clojure Con was great, but that's not really on topic.)

Depending on your development style, you may also want

  • line editing and history at the REPL
  • a syntax-highlighting editor
  • package management
  • automated builds
  • a full IDE
  • a tutorial environment
The site suggests all of those. I'm not so sure. I generally write all of my code in the Terminal and in MacVim. I'll be relying heavily on Vim for Clojure.

That's enough for this post. I'll put more up tomorrow, I hope! I know I'm late, but I view this as a running project and not an end-based one. Again, I'm a linguist, not a coder. So this is a long process.

Friday, January 13, 2012

What makes Clojure different?

A friend of mine asked me why Clojure matters, what makes it special, and why I think it is good for linguists. This post is an edited version of my answer to my dear friend. Since there are very good books on the market (my favourite is Clojure in Action) and the internet is full of good tutorials (4Clojure is especially good if you like the learning-by-doing method), my goal is only to give you a rough picture of functional programming.

An example

We are going to solve a "toy" problem stolen from the first chapter of Peter Norvig's seminal Paradigms of Artificial Intelligence Programming. The question is how to extract first and last names from someone's full name. Before you think this is too simple and not worth dealing with, consider names like Robert Downey Jr and Admiral Grace Hopper, and what about Staff Sergeant William "Wild Bill" Guarnere (a character from the Band of Brothers series)? Machines should be programmed to solve these problems, and even humans can have problems with names. It took me years to figure out that Martin "Boban" Doktor (a well-known Czech Olympic champion sprint canoer) is not a real doctor...
First, we need some data to test our assumptions.
(def names [["John" "Q" "Public"]
["Malcolm" "X"]
["Admiral" "Grace" "Hopper"]
["Spot"]
["Aristotle"]
["A" "A" "Milne"]
["Z" "Z" "Top"]
["Sir" "Larry" "Oliver"]
["Miss" "Scarlet"]
["Robert" "Downey" "Jr"]
["Gregory" "House" "MD"]])
The special form 'def' binds the symbol 'names' to our test data (a vector of vectors). A first name is usually just the first word in a name.
(defn first-name [name]
  (first name))
And the last name is the last word in a name.
(defn last-name [name]
  (last name))
Let's test our functions. Calling first-name and last-name on my name gives the right answers.
names> (first-name [ "Zoltán" "Varjú" ])
"Zoltán"
names> (last-name [ "Zoltán" "Varjú" ])
"Varjú"
We stored our test data in names, and now it's time to test our functions en masse. The higher-order function map helps us do that: map takes a function as its first argument and applies it to every member of its second argument.
names> (map first-name names)
("John" "Malcolm" "Admiral" "Spot" "Aristotle" "A" "Z" "Sir" "Miss" "Robert" "Gregory")
names> (map last-name names)
("Public" "X" "Hopper" "Spot" "Aristotle" "Milne" "Top" "Oliver" "Scarlet" "Jr" "MD")
Oops, the program has serious problems with "titles", i.e. prefixes. Calling last-name on names gives interesting results too. Our program is not that bad: it captures the basic logic of identifying first and last names, but affixes cause problems. The first name should be the first word of the name only if that word is not a prefix. Let's store the affixes in vectors.
(def prefixes ["Mr" "Mrs" "Miss" "Sir" "Madam" "Dr" "Admiral" "Major" "General"])
(def suffixes ["MD" "Jr"])
We want to test if the first word of the full name is a member of the titles. We need a function that tests membership.
(defn member [x sq]
  (if (seq sq)
    (if (= x (first sq))
      sq
      (recur x (rest sq)))))
The function member is recursive. First, it tests whether its second argument is a non-empty sequence (seq returns nil for an empty collection). The second if gives us a terminating condition: if x and the first element of the second argument are equal, it returns the whole second argument. Otherwise it tests membership again on the rest of the sequence (i.e. everything but the first element of the original sequence). Now we can redefine our first-name function: if the first word of the full name is in the list of prefixes, call first-name on the rest of the full name, otherwise return the first word of the full name.
(defn first-name [name]
  (if (member (first name) prefixes)
    (first-name (rest name))
    (first name)))
Testing our new function shows it works correctly.
names> (map first-name names)
("John" "Malcolm" "Grace" "Spot" "Aristotle" "A" "Z" "Larry" "Scarlet" "Robert" "Gregory")
We can redefine last-name similarly.
(defn last-name [name]
  (if (member (last name) suffixes)
    (last-name (butlast name))
    (last name)))
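As an aside (not part of the original post): because Clojure sets are functions, a membership test like member is often written with the built-in some; member? below is a hypothetical helper name.

;; Returns the matching element (truthy) or nil, so it works in an if
;; just like the hand-rolled member above.
(defn member? [x sq]
  (some #{x} sq))

;; (member? "Sir" ["Mr" "Sir" "Dr"]) ;=> "Sir"
;; (member? "Bob" ["Mr" "Sir" "Dr"]) ;=> nil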
Storing names in vectors of strings is very unnatural (at least for humans, I guess machines don't care about these issues). Wouldn’t it be nicer to type names like "Zoltán Varjú" instead of ["Zoltán" "Varjú"]?
First, we need new test data, which is a vector of strings.
(def names2 ["John Q Public"
"Malcolm X"
"Admiral Grace Hopper"
"Spot"
"Aristotle"
"A A Milne"
"Z Z Top"
"Sir Larry Oliver"
"Miss Scarlet"
"Robert Downey Jr"
"Gregory House MD"])
We want to use our first-name and last-name functions. Can we split a name into individual words? clojure.string provides a split function (that's why we put (:use [clojure.string :as str :only [split]] :reload) into the ns form), which splits a string into a vector of strings at a given delimiter; the space character delimits the parts of a name. Our source code looks like this now:
(ns names
  (:use [clojure.string :as str :only [split]] :reload)) ; otherwise split messes up everything

(def names2 ["John Q Public"
             "Malcolm X"
             "Admiral Grace Hopper"
             "Spot"
             "Aristotle"
             "A A Milne"
             "Z Z Top"
             "Sir Larry Oliver"
             "Miss Scarlet"
             "Robert Downey Jr"
             "Gregory House MD"])

;; the affix lists from above
(def prefixes ["Mr" "Mrs" "Miss" "Sir" "Madam" "Dr" "Admiral" "Major" "General"])
(def suffixes ["MD" "Jr"])

(defn member [x sq]
  (if (seq sq)
    (if (= x (first sq))
      sq
      (recur x (rest sq)))))

(defn last-name [name]
  (if (member (last name) suffixes)
    (last-name (butlast name))
    (last name)))

(defn first-name [name]
  (if (member (first name) prefixes)
    (first-name (rest name))
    (first name)))
Now we can test split from clojure.string.
names> (split "Zoltán Varjú" #" ")
["Zoltán" "Varjú"]
Let's define a split-name function just to save ourselves from repetitive strain injury caused by excessive typing.
(defn split-name [name]
  (split name #" "))
Finally, we test whether our functions work on the split names.
names> (map split-name names2)
(["John" "Q" "Public"] ["Malcolm" "X"] ["Admiral" "Grace" "Hopper"] ["Spot"] ["Aristotle"] ["A" "A" "Milne"] ["Z" "Z" "Top"] ["Sir" "Larry" "Oliver"] ["Miss" "Scarlet"] ["Robert" "Downey" "Jr"] ["Gregory" "House" "MD"])
names> (map first-name (map split-name names2))
("John" "Malcolm" "Grace" "Spot" "Aristotle" "A" "Z" "Larry" "Scarlet" "Robert" "Gregory")

Notes

I have to note that you can make the code more concise and idiomatic; see the sketch below. I hope you can now see 1) how you can solve a problem with functions and by combining them, 2) what recursion is, at least roughly, and 3) how you can go from a basic problem to an acceptable solution.
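For example, here is one possible more concise version, a sketch of my own rather than code from the post; it assumes the prefixes and suffixes are kept in sets (prefix? and suffix?) so they can be used as predicates.

;; Sets as predicates plus drop-while / take-while replace the
;; explicit recursion in first-name and last-name.
(def prefix? #{"Mr" "Mrs" "Miss" "Sir" "Madam" "Dr" "Admiral" "Major" "General"})
(def suffix? #{"MD" "Jr"})

(defn first-name [name]
  (first (drop-while prefix? name)))

(defn last-name [name]
  (last (take-while (complement suffix?) name)))

;; (first-name ["Admiral" "Grace" "Hopper"]) ;=> "Grace"
;; (last-name ["Robert" "Downey" "Jr"]) ;=> "Downey"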

What makes Clojure different?

Norvig lists eight features that make Lisp different:
  1. built-in support for lists
  2. automatic storage management
  3. dynamic typing
  4. first-class functions
  5. uniform syntax
  6. interactive environment
  7. extensibility
  8. history (see Paul Graham's essays, What Made Lisp Different and The Roots of Lisp)

Clojure is a Lisp on the JVM, which makes it unique. The Java Virtual Machine makes it portable, reliable and secure, and there is also a newer JavaScript-based version called ClojureScript. SLIME gives you an excellent development environment, and Leiningen makes project automation easy. Java interoperability means Clojure gets a great collection of libraries for almost everything; a small example follows below.
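For instance, calling into the Java standard library needs no wrapper at all; a tiny illustration (my own, not from the post):

;; An instance method call and a static method call on plain Java classes.
(.toUpperCase "clojure") ;=> "CLOJURE"
(java.util.UUID/randomUUID) ;=> a freshly generated java.util.UUID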
However, Clojure is not for complete beginners. The Clojure community is very open and supportive, but asking the right questions requires a certain maturity. As this Reddit thread explains, you don't have to be a Java expert to pick up the language; you can learn what you need to know on the go. But you should know at least one "conventional" language like Python before you start learning Clojure. More propaganda in our Why Clojure lx? post.