OOP: Objects Are Reflections

In the early days of computing, you would write a program to run a series of instructions that (for the most part) took one main input and produced one main output. If a program needed a second input during processing, it would have to stop and wait for that input before it could resume. In the SUPER early days, that was all you could do, because input was fed to a computer using punch cards! Now, things are a little more complex. Many personal computers include a camera and microphone, and run complex operating systems that can handle multiple tasks in parallel.

A fortran punch card

Can you imagine writing code using only these? You can see the FORTRAN instructions that are punched out at the top: “Z(1) = Y + W(1)”

As these inventions came along, the classic model of input/output (called IPO – Input, Process, Output) had to be altered a little to allow more complex operations. As programs became less linear, programming started to drift away from a paradigm in which the programmer was essentially writing instructions to solve for x in an equation, and began taking on a more representative nature.

Enter Object Oriented Programming (OOP). When the desired task is more complex, it makes more sense to model the program after its real world representation. A basic object is pictured below.

An object, which holds a set of data relating to one concept.

An object, containing an integer, a floating point number, 6 characters, and a double precision floating point number.

If we wanted to write a program to model cars driving on a street, stopping at stoplights, we would define the objects car and stoplight, and possibly also road. Each object is simply an area of computer memory with two important factors:

  1. A set of values which describe the object being represented (numbers, text, dates, colors, true/false etc.)
  2. A set of actions the object knows how to perform
    1. In the case of our car object, some possible actions might be start, changeGear, or turn.
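The two factors above map directly onto a class definition. Here is a minimal sketch (Python here, with made-up makes and colors – the names start and changeGear come from the car example above):

```python
class Car:
    def __init__(self, make, color):
        # 1. Values that describe the object being represented
        self.make = make
        self.color = color
        self.gear = 1
        self.running = False

    # 2. Actions the object knows how to perform
    def start(self):
        self.running = True

    def changeGear(self, gear):
        self.gear = gear

my_car = Car("Toyota", "red")   # one area of memory describing one car
my_car.start()
my_car.changeGear(3)
print(my_car.gear)  # 3
```

Every Car made this way carries its own values, but they all share the same set of actions.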

When programmers started writing objects, we began modeling our computers after the things we saw in our day-to-day lives. Objects are mainly cool because they allow the programmer to worry less about the actual memory and individual instructions the computer is performing, and focus more on what the computer should do with all of the info about the defined objects! This means less code for the programmer, but can add some overhead because the computer is dereferencing (finding) all of the object’s data by its own means, so sometimes it has to ‘look up’ where it stored something previously.

But overall, it’s great! In programming, Objects are truly the building blocks of Digital Reflections! Defining them abstracts the world around us, distilling physical things into their conceptual counterparts. To me, there isn’t much cooler than that.

Just thought you would like to know.

Neural Networks: Created in our Image?

How cool are they? Well, they’re integral to this process: 

An example of how it works can be found here. I think you’ll be surprised how simple it is! Here’s the breakdown:

  1. Tell the computer to try something
  2. Compare the gathered results to the result of a known success
  3. Adjust the input
  4. Compare the new result to both the expected and the previous results
    1. If the result is better, try something more like this test
    2. If the result is worse, try something less like this test
  5. Repeat until what is being tried is very close to working results
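The five steps above can be sketched as a tiny guess-and-adjust loop. This is only a toy illustration (Python, with an arbitrary target and step size), not a real training algorithm:

```python
# Try something, compare to a known result, and keep whichever
# direction makes the result better.
target = 42.0          # the "known success"
guess = 0.0            # the first thing we try
step = 8.0

prev_error = abs(target - guess)
for _ in range(100):                  # repeat until close
    trial = guess + step              # try something new
    error = abs(target - trial)       # compare to the known result
    if error < prev_error:            # better: try more things like this
        guess, prev_error = trial, error
    else:                             # worse: turn around, take smaller steps
        step = -step / 2
    if prev_error < 1e-6:
        break

print(round(guess, 3))  # 42.0
```

Real neural networks do something much more sophisticated with many values at once, but the shape of the loop – try, compare, adjust, repeat – is the same.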

Seems pretty straightforward, right? It’s just what a baby does when it’s learning to walk. When it sees an adult walking, it knows it should be able to move around, but the necessary motor skills aren’t there yet, so it does what it can and tries over and over. It might strike you as a very human concept. I think it should.

The name isn’t arbitrary.

What makes a network neural? Well, you have to take a look at a neuron to understand that!

A neuron, with dendrites, soma, axon, and terminal end

A neuron, with the axon shown

A neuron fires in reaction to stimuli from the body’s many sensory receptors. It ‘lights up’ whenever some input from the body meets a certain condition. We commonly refer to the connections between neurons (called synapses) because a single neuron is not useful for much on its own, just like a single bit in a computer. A neuron can share its state with other neurons via its axon, and when lots of neurons connect, they can manage complex states and tasks.

Note: When we talk about a neural network in the brain, we mean a Biological Neural Network.

Now that you’ve got an idea of what that looks like, let’s see the digital one.

A neural network, showing the input, hidden, and output layers.

This kind of looks like a brain, right?

 

In the digital system, the inputs enter on the left as they are gathered from some input module (a sensor, keyboard input, a temperature readout, etc.). In the hidden layer, each node performs a weighted calculation on the inputs it receives. The output layer combines those results into the network’s answer, which is compared against the expected answer; the weights are then adjusted in the hopes of performing better next time.

Abstractly, the most striking similarity between these concepts is that they are both adaptive, and they are many-to-many relationships. That is, each node can have multiple inputs and/or multiple outputs.
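To make the many-to-many idea concrete, here’s a minimal sketch of data flowing left to right through such a network (Python, with made-up weights – real networks learn their weights rather than hard-coding them):

```python
# Each row of weights feeds one node: many inputs combine into one output,
# and each node's output fans out to every node in the next layer.
def layer(inputs, weights):
    return [sum(w * x for w, x in zip(row, inputs)) for row in weights]

inputs = [1.0, 0.5]
hidden = layer(inputs, [[0.4, 0.6], [0.9, -0.2]])   # hidden layer: 2 nodes
output = layer(hidden, [[1.0, 1.0]])                # output layer: 1 node
print(output)  # [1.5]
```

Every node here has multiple inputs and/or multiple outputs – exactly the many-to-many relationship described above.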

Final Thoughts

Because we’re talking about computers, of course we have to consider the larger scale case. Given enough nodes in the network, and by extension more predictors of an outcome, can a computer learn to reason? Or can it only mimic the process? I’ll leave that for you to think about.

 

Are We Coded? DNA vs. Digital Data

Here we go…

We’ve discussed a lot of different similarities between the natural world and the world of computing, but today we’re going to hit a little closer to home. We’ll tackle the startling similarities between the code that runs your computer and the code that describes you. That’s right, we’re going to talk about DNA. There are some interesting similarities between DNA and code, but we’ll discuss some differences first before I get too deep into it.

The basics

DNA and code share one important key factor – they both encode information. Where code (usually) uses a binary sequence, genetic material encodes information using a sequence of building blocks called nucleotides. There are 4 possible nucleotides that can sit at each spot in the sequence – A, T, C, and G. A single strand of RNA (a working copy that the cell’s machinery transcribes from one half of the DNA) looks something like this:

Since there are 4 possible values for each spot in the sequence, the number of possibilities that can be represented by a string of nucleotides grows much faster than a binary value’s. While a binary string can represent 2^n unique states (where n is the number of digits), a DNA strand can represent 4^n unique states! That might not seem like the biggest difference, but it grows fast. 10 binary digits can take 1024 different values, but 10 nucleotides can have 1,048,576 different arrangements! And the difference only grows as you add more digits. BUT THAT’S NOT EVEN ALL!
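You can check those numbers yourself in a couple of lines (Python):

```python
# Unique states for n binary digits vs. n nucleotides
n = 10
print(2 ** n)   # 1024
print(4 ** n)   # 1048576
```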

A full strand of DNA (as opposed to RNA) has sets of base pairs, where A always bonds to T, and C bonds to G. DNA can be read from either of its two sides, so in a sense it actually has 2 * 4^n states. Wouldn’t it be cool if code could be read backwards too?

How it’s Read

To read a string of binary digits, some form of sensor that detects the state of each transistor in memory (there are a number of kinds for different media) passes over each state in a sequential manner. Based on the data read from those states, the computer can move to a different area in memory to get specific data from somewhere else (a branch operation), or make decisions based on the data it’s reading. (Is this value bigger or smaller than a constant? Than some other variable?)

To read DNA, the cell first ‘unzips’ it and transcribes an RNA copy of one strand; the cell’s ribosomes then pass over that copy one codon at a time. To copy the DNA itself, each half of the split is paired up with a sequence exactly opposite (A <==> T, G <==> C) to reform the base pairs, resulting in two copies of the DNA. When the information needs to be expressed, the unused RNA can be snipped out and the rest processed by the cell, producing the desired traits. This is how cells become all of their different kinds – each locates the code that expresses its type, cuts out the rest, and acts on what’s left.
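The pairing rule itself is simple enough to sketch in a few lines (Python; the sequence here is made up):

```python
# A always pairs with T, and C with G
PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement(strand):
    # build the opposite strand, one base at a time
    return "".join(PAIR[base] for base in strand)

half = "ATCGGA"
print(complement(half))  # TAGCCT
```

Note that complementing twice gets you back the original strand – which is exactly why one half of a split molecule is enough to rebuild the whole thing.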

Flags

With both DNA and machine code, the fact of the matter is that not all of it counts all of the time. In a code file, a programmer often adds comments to make the code more human-readable so that other programmers may better understand it, and understand it faster. Here’s an example from some PHP I wrote:

//Create image from data, grab its width and height
$srcimg = imagecreatefromstring($content);
$swidth = imagesx($srcimg);
$sheight = imagesy($srcimg);

//Create a destination for our thumbnail to be stored with appropriate
//scaling and width 100
$thumbWidth = 100;
$thumbHeight = (100/$swidth)*$sheight;
$thumbnail = imagecreatetruecolor($thumbWidth,$thumbHeight);

The lines that begin with “//” are comments – they aren’t code. Even if you can’t read the other lines, these probably make sense to you, and they should give you some idea of what’s going on. (I was manipulating an image on a web page.)

Just like code has comments, DNA has sequences that indicate to the proteins that read them that the sequence that follows is not meant to be read. While in some cases the DNA is just junk (not used at all), usually this is to signal to the cell that the information following doesn’t code for that particular cell. Because each cell carries a copy of the entire genome with it, it uses these as a way to signal which code is actually important to the cell reading it. Unlike code, when a cell reaches a bit of data it doesn’t need, it actually cuts it out of the sequence rather than just ignoring it.
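That difference can be sketched like this: a toy ‘cell’ that physically cuts flagged sections out of a sequence before acting on it (Python; the bracket markers stand in for the real signal sequences, which are far more complex):

```python
# Remove everything between the start and end markers entirely,
# rather than just ignoring it the way a compiler skips comments.
def splice(sequence, start="[", end="]"):
    result, skipping = [], False
    for ch in sequence:
        if ch == start:
            skipping = True
        elif ch == end:
            skipping = False
        elif not skipping:
            result.append(ch)
    return "".join(result)

print(splice("ATC[GGGG]GA"))  # ATCGA
```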

When it Goes Wrong

Sometimes, the data that defines a system becomes corrupted. In the digital world, this can come from noise, failure during transmission, or the (very) occasional computational error. In the world of biology, errors are called mutations, and they can range from positive to horrifying.

When DNA becomes corrupted, some of the possible outcomes include cancer and tumors. They occur when a cell that has a harmful mutation is not stopped from dividing. The error spreads until the harm either interrupts the necessary processes of life, or the cells with the mutation all die.

In particular, the situation is similar to a very peculiar kind of computer issue. In a *nix system, a process can be cloned from a parent process (becoming a child process) using a system function called fork(). In a way, this is much like a cell’s ability to divide into two identical cells, and when it goes wrong the computer can lose control much like a body can. When fork() is either programmed incorrectly or invoked repeatedly by a malicious user (a ‘fork bomb’), resources are consumed until the OS is depleted of them and crashes. Ouch.

When it’s Meant to go Wrong – Viruses

This is a really interesting connection to me because the terms match. As well, both are horrifying.

Just look at this. Isn’t it creepy? Its technical name is bacteriophage.

Seriously, though. Imagine thousands of these guys latching onto your cells, splitting them open, and then using your dead cells to ‘give birth’ to more! (An approximate term since technically, viruses are not living creatures) Similarly, computer viruses tend to attack the normal functions of otherwise good code by exploiting some weakness the developer didn’t think of. The virus then usually has some code that uses your infected computer to infect other computers. Creepy!

I think that’ll be all for now. brb, time to go update my anti-virus…

This post inspired in-part by Bert Hubert’s magnificent page on the same topic. Please check it out! It’s a lot more comprehensive (but a bit more technically demanding, and written for programmers) 

The Handshake

When you see someone you know, what’s the first thing you do? Do you wave? Exchange a simple ‘Hi!’ / ‘Hey, what’s up?’ When I run into someone I know, I have my own kind of handshake, a unique expression that only those I know understand. It’s simple, really: standing up, I put my arms out to my side and wiggle them like a periodic function. I can identify myself to those who understand the gesture from very far away without yelling, so it comes in handy sometimes. To those who don’t know me, I just seem like a lunatic. The way I see it, it’s a fair trade-off.

The interesting thing is, when computers want to communicate they use a similar system. Here, information theory has taken a lesson from the communications field.

The TCP Handshake

The most common kind of network connection by far uses TCP – the Transmission Control Protocol. First specified in the early ’70s (in RFC 675), it’s the most-used method of transmitting data. When your computer wants to receive data from a server, it performs a handshake like the one above, though a little more technical than some arm wiggling. An example is provided below:

A three-way TCP handshake exemplifying the SYN-ACK pattern

A typical TCP handshake. Notice it starts with SYN, follows with SYN, ACK, and then ACK. Thanks to PacketLife.net and the author, Jeremy Stretch, for the example

The above was captured using a network tool called Wireshark, which can display a lot of cool information about the traffic on your network. This example shows the two relevant IP addresses at the top (192.168.1.2 is a local address; the 174 address is a server somewhere). Each side sends SYN and ACK messages to the other. SYN stands for synchronize; ACK stands for acknowledge. They are used like this:

1) The computer requesting information (the client) sends a SYN message, telling the server it wants to request information.

2) The server sends back a SYN-ACK message, telling the client that the server is running and waiting for a response from the client.

3) The client sends an ACK message, ending the handshake.

Now the client can send ACK messages identifying the data it wants, and the server recognizes the client. It’s also a neat feature of TCP that both the client and server keep a counter of where in the stream they are (the sequence number), so messages that arrive out of order can be fixed up, and incorrect messages (which occur naturally when data is corrupted by interference) can be identified by their number and sent again.
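You never write the SYN and ACK messages yourself – the operating system performs the whole handshake when a client connects to a listening server. A minimal sketch in Python, using the loopback address so both ends run on one machine:

```python
import socket

# A listening 'server' socket; port 0 lets the OS pick a free port.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

# connect() triggers the three-way handshake: SYN, SYN-ACK, ACK.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
conn, addr = server.accept()    # by now the handshake is complete

client.sendall(b"hello")        # now data can flow
data = conn.recv(5)
print(data)                     # b'hello'

client.close(); conn.close(); server.close()
```

If you ran Wireshark while this script executed, you would see exactly the SYN / SYN-ACK / ACK exchange pictured above.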

Like my visual handshake or a literal one, two entities can establish a connection and begin speaking to each other in a short number of actions.

Addendum: Recursion in Language and Music

Last week I wrote about language and how we speak to computers. This week, I’d like to add a little bit to that.

First for discussion, some pictures from a fantastic book by Douglas Hofstadter. If you’ve enjoyed my blog so far, his book Gödel, Escher, Bach is one to pick up. He discusses many topics similar to the ones described here. A few weeks back, I talked about recursion and where it appears in nature. Last week, I talked about digital languages. Now, I’ll share this interesting display of recursion in natural language.

A photo from the book: Hofstadter’s diagram of grammar rules for forming simple sentences

Here, Hofstadter shows a diagram of a set of possible simple sentences through grammar rules. Following any arrows in the diagram, a grammatically correct sentence can be formed using the correct parts of speech. On the next page, Hofstadter shows the cool part.

A photo from the book: the same network with a recursive insertion marked by a dotted box

Hofstadter refers to each series of words as an RTN, a Recursive Transition Network. The small, dotted box section represents a whole new series of words that could be inserted into the original sentence. This is only one example of possible sentence-ception. As long as the entire original sentence structure can fit nicely in place of any of its own elements, this process can occur. One important distinction to make is that if a sentence is placed inside another, the original sentence doesn’t end until the inserted one does. When there are many levels of recursion, CS/IT people often describe starting new inserted sentences as going down, and returning from them as ‘bubbling up’.
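A toy RTN is easy to program – the function for a noun phrase can call itself to nest a new clause inside the sentence, and the outer sentence doesn’t finish until the inner one ‘bubbles up’. (A sketch in Python; the vocabulary is made up.)

```python
import random

def sentence(depth=0):
    return noun_phrase(depth) + " " + verb()

def noun_phrase(depth):
    np = random.choice(["the dog", "a bird"])
    # Recursion: sometimes insert a relative clause -- a phrase inside
    # the phrase that must finish before we bubble back up.
    if depth < 2 and random.random() < 0.5:
        np += " that saw " + noun_phrase(depth + 1)
    return np

def verb():
    return random.choice(["runs", "sings"])

print(sentence())
```

Run it a few times and you’ll get sentences like “the dog that saw a bird runs” – grammatical, and nested exactly the way Hofstadter’s dotted box suggests.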

I think it is also interesting to notice that, proper grammar aside, people my age have a tendency to do this when providing background knowledge along with an explanation. More than a few times my friends and I have halted an explanation because we realized our listener had no idea what we were talking about, prompting us to go off on another five-minute explanation before returning to the original point.

So we see recursion in language, but were you aware it makes its share of appearances in music? One of my all-time favorite musicians was particularly fond of the idea. This musician is Johann Sebastian Bach, and he wrote a number of fugues, which I’ll discuss in a moment. I’ve included a computer-generated visualization of his most famous piece here by the incredible smalin. It so happens that the topic arose so close to Halloween, and since that is what most people associate with this piece, I figured it was a good time. (Even though the middle fugue section is very pleasant and isn’t creepy-sounding at all!)

Not all of the piece is a fugue (the other section is a toccata, obviously) but it provides the concept for discussion. A fugue always starts with a simple theme (a unique musical idea in one voice) and then re-uses it, exploring it further. The theme always re-appears, traveling from voice to voice. But what Bach does so well is to arrange the theme so that it often fits inside itself, inverts, shifts up and down, stretches and more. Bach will often take one musical idea and even pair it with a distorted version of itself, harmonizing the two! And believe me, that takes more skill than one might imagine…

This kind of composition is both artistic and mathematical, which in my mind makes Bach one of the most brilliant musicians of all time. If you’re looking for a more concrete example of what I’m talking about and want something a little less spooky, I can also suggest his ‘Little Fugue’:

They’re not the purest examples of recursion, but I feel they should be noted. The determinism of Bach’s compositions is almost beautifully mechanical. The intervals are played with in a deterministic fashion, leaving only the loose ends of harmony to the intuitive process.

I hope that’s satisfied your appetite for digital reflections for the week. Check back later for more cool examples!

Language Processing: Do Computers Do It So Differently?

When we think of our computers and the language they speak, we tend to jump to the most colloquially known: binary. But we’ll take it a step past NCIS…

It’s not so simple. (And yes, two people typing on the same keyboard is just ridiculous) Just like human language has evolved, so have the languages computers speak and read. Let’s draw some parallels!

Binary

An image like this probably just appeared in your head, right?

Okay, we’ll start here. Your processor (the CPU) works entirely in binary. Binary is important because we store values as HIGH/LOW states using two different voltage levels, but it’s only the most raw form of computer data and processing. It’s useful because of the logic gates I mentioned last post. But most data in a computer exists in some other form.

A basic character

Here’s some necessary info for the next parts, so I’ll make this quick…

Characters in our computers are stored in two main forms, ASCII and Unicode. More about that here.

We’ll focus on ASCII here, in which 8 bits code for each character. Depending on the number that the 8 bits represent, a certain character is represented and can be looked up in a table like this one:
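Most languages will do the table lookup for you. In Python, for example, ord() and chr() convert between a character and its code:

```python
# 'A' sits at position 65 in the ASCII table; 8 bits is plenty.
code = ord("A")
print(code)                  # 65
print(format(code, "08b"))   # 01000001
print(chr(65))               # A
```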

Hexadecimal

If we use only 0’s and 1’s, we need a lot of them to represent larger values. With eight binary digits we can represent 256 different values (0 through 255). But what if we want to store a much larger number? What if we want to do it in a way that makes more sense to a human?

Human readability is important. No one programs these days with only 0’s and 1’s, although some may joke about it…

A magnetized needle… would only be useful to flip individual bits x__x (0 -> 1, 1 -> 0)

Instead, sometimes it’s useful to go a step up. A good example of this is colors. For colors we use hexadecimal. You may recognize its formatting, which codes for RGB values on web pages:

Red: 0xFF0000

Green: 0x00FF00

Blue: 0x0000FF

How did we get to F from only 0’s and 1’s? Hexadecimal is base 16: its digits run 0–9 and then A–F, so F stands for 15 (binary 1111), and each hex digit packs exactly four binary digits. (When a color code sits in a text file, each digit is itself stored as an ASCII character – a ‘0’ is stored as the value 48, and an ‘F’ as 70, or 102 if it’s a lowercase ‘f’ – all of which can be looked up in the table above.)

Just like we use a series of 8 bits to represent a character, we use multiple hex digits to represent a color. We could keep going up step by step, but that would take a while… Let’s get into the more interesting parts.
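Going back the other way is mechanical too – each pair of hex digits in a color code is one 8-bit value from 0 to 255. A quick sketch in Python (the function name is my own invention):

```python
def rgb(color):
    # strip the '0x' prefix, then read the digit pairs in base 16
    digits = color[2:] if color.startswith("0x") else color
    return tuple(int(digits[i:i + 2], 16) for i in (0, 2, 4))

print(rgb("0xFF0000"))  # (255, 0, 0) -- pure red
print(rgb("0x00FF00"))  # (0, 255, 0) -- pure green
```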

Some Code

When a programmer wants to get a computer to accomplish something, she’ll open up her favorite editor and type something like this…

#include <stdio.h>

int main()
 {
        printf("hello world\n");
 }

main is simply a function that every program has, and execution always starts there. The code that gets executed lies between the { and }. There’s one line here: a call to a function. You might be able to guess that this code prints the message “hello world” to the screen. This is a classic proof-of-concept program that programmers write the first time they use a new language. It’s become somewhat of an inside joke 🙂

In English, we have grammar rules that we (formally) must abide by. The computer has these too. In many languages, for example, the end of a statement is indicated by a semicolon. If you forget it, the computer gets confused and can’t evaluate the code.

Something a Little More Readable: SQL

Some standards use more natural terms, like full english words. Check out this SQL statement:

SELECT Name, ProductNumber, ListPrice AS Price
FROM Production.Product
WHERE ProductLine = 'R'
AND DaysToManufacture < 4
ORDER BY Name ASC;

This queries a database for data about manufacturing, and a lot of it probably makes sense on its own. We want rows from the table called Production.Product whose product line is ‘R’ and that take fewer than 4 days to manufacture. Specifically, we select each product’s name, product number, and list price (renamed Price), sorted by name. SQL is designed to be written in English terms, so it looks a little more friendly.

Since this is a big topic, I’ll pick up again next week with natural language processing, a fascinating topic. See you then!

About My Image…

Why a Fern?

The choice of header image here is not accidental. I decided to use the shape of a fern as the splash image for my blog. Why? Well, ferns grow in simple recursive patterns, with stems that grow out of each other in a pleasant pattern. My hope for this blog is to connect the very theoretical concepts employed in computing technology today to the natural concepts and mathematics that inspired their use. A fern captures this spirit, because it displays simple recursion in a natural way. I knew when I began this blog that I wanted some type of plant for the main image, because very many plants grow in recursive structures. This concept was one that I wanted to tackle from the outset- because it was such a building block for both organic life AND our computation technology. Without the structure (that my field has so lovingly called a tree with respect to the greatest designer, nature) many of the operations that you perform daily would be much slower. The tree structure allows for relatively easy reading and writing of organized data. Without it we would have to search all of our data just to find the one thing we need.

The audience I would like to address with this format ranges from those who are barely technologically literate to technology experts. My hope is that those new to computers will see the beauty of how a system of many simple decisions can become something immensely complex, and those versed in technology can see that their passion and tools come from our understanding of nature, electricity and even ourselves. With the image, I hope to draw new readers with the beauty of the pattern, and catch the eye of the odd IT professional with imagery of everyone’s favorite data structure.

My title also addresses this dichotomy, in the form of a bad pun. (Sorry!) Fractals are reflections of themselves, and my posts are reflections. And it’s all digital! And about things of a digital nature. So, maybe now that makes some sense.

Hopefully I can persuade you to take a closer look the next time you wonder how that one webpage implements a cool feature, or how that cool tree outside your house grows in such a cool pattern. They may be more closely related than you think! That’s all for now.

-Alex

The Power of Boolean Logic

This week I’d like to discuss why this is possible:

Perhaps George Boole is a name you’ve heard of. Perhaps not. He made an astounding contribution to the world, and I’d like to take the time to acknowledge his work and the influence it still has today. And what his contribution could possibly have to do with Minecraft.

Boole’s most relevant work (in this context) is his idea of Boolean logic. So what is it? Boolean logic systems process binary values, meaning any given variable can take only one of two options. George only worked in terms of the values true and false, but they could be represented as 0 and 1 (in the computer world), HIGH and LOW (in the electronics world), or WIN and FAIL if LOLCODE is more your kind of thing. Trust me, that’s one hyperlink to follow if ever there was one, because there’s nothing better than typing this:

HAI
I HAS A N ITZ 0
I HAS A FACTORIAL ITZ 1
IM IN YR LOOP UPPIN YR N WILE N SMALLR THAN 17
VISIBLE SMOOSH N "! = " FACTORIAL
FACTORIAL R PRODUKT OF FACTORIAL AN SUM OF N AN 1
IM OUTTA YR LOOP
KTHXBYE

into your compiler and getting a working program, just like any other language could produce. (The reason for this is its Turing completeness)

So how does this all tie in together?

Well, the Minecraft CPU and LOLCODE share some key properties.

  • They can process an input
  • They can save and recall values entered
  • They can evaluate Boolean logic statements

Boolean logic is fairly straightforward. It works in base 2 instead of base 10. When your digits can only be one of two values, the number of operations you can perform is pretty limited. There are many boolean operations possible, but I’ll break down the basic ones that are the most important:

  • AND: Are both digits being compared set to HIGH/1? If they are, the result is also HIGH/1. If either or both values is LOW/0, the result is LOW/0.
  • OR: Is either (or both) value HIGH/1? Then the result is HIGH/1. If neither is, the result is LOW/0
  • XOR: Exactly like OR, except if both values are HIGH/1, then the result is LOW/0

These three provide us the ability to compare whether a number is equal to another, or whether one number is greater than the other, or to compute the four basic arithmetic operations, or… well, the list could go on. But there are a few other operations that are worth mentioning too:

  • NOT: Takes one input and returns the logical negation – any input that goes in HIGH/1 comes out LOW/0, and vice versa. For a string of digits, each digit is negated and the whole string returned.
  • Equality operator: Because we can compare two values digit by digit, we can check whether they are set to the same thing and get a yes/no answer from the computer.

You may have seen these same concepts in this form, a truth table.
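Most programming languages expose these operations directly. This little loop prints the truth table for AND, OR and XOR (Python, using its bitwise operators):

```python
# Every combination of two binary inputs, with AND, OR, XOR results
for a in (0, 1):
    for b in (0, 1):
        print(a, b, a & b, a | b, a ^ b)
```

The four printed rows are exactly the rows of the truth table above.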

In the video, the author flips a series of redstone switches to input a binary value. I should note that, more formally, the Minecraft ‘CPU’ is better described as a representation of an ALU – the portion of the CPU that specifically handles arithmetic and logical operations. The author puts in the instruction

LOAD 20 + 15 OR 170 WRITE out

This tells the ALU to calculate the value (LOAD) of 20 + 15 (Entered in binary as 20: 00010100 (+) 15: 00001111) and then perform an OR calculation on the answer with the value 170: 10101010
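You can check the ALU’s work with ordinary integer operations (Python; format(x, "08b") shows the value as 8 binary digits):

```python
total = 20 + 15                 # 00010100 + 00001111
result = total | 170            # OR the sum with 10101010
print(format(total, "08b"))     # 00100011  (35)
print(format(result, "08b"))    # 10101011  (171)
```

Each 1 in the result comes from a 1 in either the sum or the mask – exactly the OR rule from the list above, applied digit by digit.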

Now, it’s important to keep in mind that George Boole was a philosopher first. He lived in the 19th century! This was long before the computer or even the transistor, which we’ll get to next. George was developing the logical building blocks that computers would use about a hundred years before the mechanical components that could perform them came to be! To George, there was something about these basic ideas of truth and falsehood that was very important to both philosophy and mathematics.

Enter the Transistor

In the 1930s, Bell Labs began searching for a way to make telephone lines carry signals over longer distances. They had no idea that the solution to their problem – the transistor, demonstrated in 1947 – would also enable us to build computers as we know them today. A transistor is straightforward and works like what we have described before. It has three parts:

A transistor is an electrical switch; this is its notation

  •  The base/gate: The ‘input’ – we can test whether this is above a certain voltage (HIGH), or below it (LOW)
  • The emitter/source: Where a HIGH voltage is supplied (the power that runs the circuit)
  • The collector/drain: Where the output voltage goes. Values are read from the collector.

The RAM in your computer is just a really large array of transistors which can be written to and read from very quickly and randomly (as the name implies). When we speak about transistors as switches in a computer, we use the term bit. Stack up 8 bits and you have a byte. Enough bytes and you start measuring in kilobytes (1024), megabytes, gigabytes, terabytes and beyond. (Once you get to exabytes, you’re in measuring range of Google’s total data storage capacity).
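The ladder of units is just repeated multiplication by 1024, which is quick to verify:

```python
byte = 8                      # bits in a byte
kilobyte = 1024               # bytes in a kilobyte
megabyte = 1024 * kilobyte
gigabyte = 1024 * megabyte
print(gigabyte)               # 1073741824 bytes
print(gigabyte * byte)        # 8589934592 bits
```

Over eight and a half billion individual switches in a single gigabyte – and that’s small by today’s standards.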

Now if we scale it up…

With these pieces of knowledge, the novelty of the Minecraft computer comes to light. The author simply realized that the redstone blocks in Minecraft function like transistors, and decided that since you can build one… why not build a lot and use them as memory? And how about an ALU? Take another look at the creation – it’s massive!

The scale is important. It takes a lot of redstone-transistor models to perform a simple binary calculation. This can help put the advances of modern processors in perspective. The number of transistors in today’s processors is not measured in ones, hundreds or even thousands. Today we count by the billions. And because of Moore’s Law, we can see that about every two years, that number doubles. 

This is big. The use of many states means that we can calculate more figures faster, and store more figures before running out. We can write, edit and save bigger files faster, and compute more complex operations by repeating simple ones very quickly. (Graphically heavy processes, like modern-day games, require heavy real-number computation – non-integer arithmetic, e.g. 2.45/1.5, 6.83944783 + 0.0045627761, etc. – measured in FLOPS.)

Where Does This Leave Us?

When we turn on our computers today, we don’t think about the individual numbers that are working to produce the cat video on our screen or the word document we’re typing up. All of those little computations are abstracted out for us. Many levels of computation power the visual, auditory and mechanical processes that we use to send email or shoot digital pigs with birds in our spare minutes. Even modern programmers enjoy a level of abstraction from the basic computations – modern languages have built in functions that hide the messy details of memory management and binary arithmetic. You can just type

int a = 5;
int b = 10;
int x = a + b;

And x would be set to 15. If I wanted to print it out, I would type something like

System.out.println(x); (Java is the language I am using here)

Are we in a simulation?

Buckle up, this is where things get WEIRD. Have you ever pondered this question yourself? Hopefully you haven't worked yourself into an existential crisis, but it's worth considering. This absolutely fascinating paper by Nick Bostrom analyzes the statistical probabilities underlying this question. The logic goes roughly like this: we can simulate complex processes with computers, so isn't it possible that we exist inside a simulation too?

Bostrom asks us to consider a scenario in which the human race would like to know every possible outcome of its universe. If its universe were finite and deterministic, could it not be simulated given a large enough number of states?

What do you think? Is life too complex and intricate to be finite or deterministic? Do you think computers may be able to predict multiple branches of the many worlds physicists theorize about? I’ll leave that for you to ponder…

What Are Fractals?

And how do they relate to recursion? It’s simple-complicated. You’ll see.

Fractals are beautiful in multiple ways, and mind-bending in all of them. The most accessible way in is to view them visually – partly because it looks so trippy, but fractal zooms are genuinely enjoyable to watch (although repetitive once you get the point). Although I probably wouldn't watch all 15 minutes, a few moments of observation illuminate some interesting qualities. Most obviously, as the camera zooms, a general similarity can be seen between the 'outer' or 'upper' levels and the lower ones. In fact, there is usually a small set of simple patterns that repeats over and over again! That is: zoomed in or out, the image displays the same details and shapes as the view you started from!

And they ARE trippy…

So what about recursion? Ah, well here’s where it gets interesting. Before, when we defined recursion, we described a scenario in which an original process began a new version of itself, which also had the ability to create a new version of itself. And so on and so on. Visual representations of fractals display the same self-similarity property of recursion, but no particular spot in the fractal can be determined as the ‘starting point’. You could take any view of a computer-generated fractal and zoom in OR out, and you could zoom as long as you want. You would still see a self-similar image; something close to the original you zoomed from. This raises a question though: if it doesn’t have a starting position, it probably doesn’t have an ending position… so how does it even exist?!
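To make that self-similarity concrete, here is a minimal sketch in Java (the class and method names are my own) that builds a Cantor-set-style pattern: the same "keep the outer thirds, blank the middle" rule is applied at every scale, so every level is built from shrunken copies of itself.

```java
public class FractalSketch {
    // Apply one rule at every scale: keep the outer thirds,
    // blank out the middle third, and recurse into each kept piece.
    public static String cantor(int length, int depth) {
        if (depth == 0 || length < 3) {
            return "#".repeat(length);
        }
        int third = length / 3;
        return cantor(third, depth - 1)
                + " ".repeat(length - 2 * third)
                + cantor(third, depth - 1);
    }

    public static void main(String[] args) {
        // Each row is one level deeper; every row contains
        // two smaller copies of the row above it.
        for (int depth = 0; depth <= 3; depth++) {
            System.out.println(cantor(27, depth));
        }
    }
}
```

Each printed row is one step deeper, yet each is just two shrunken copies of the previous rule – the textual cousin of the fractal zoom.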

Often in mathematics, the solution to a problem seems impossible (like this one). Numbers that aren't real can exist just to make an equation work (i), and infinities can boggle the mind, sitting at the very limit of human comprehension. When learning something mathematical, we tend to ask ourselves "when will I ever use this?", but if we stopped and asked "where in the world does this come from?" we'd be more on the right track. The gut reaction is to say "this is some far-off concept I'll probably never benefit from, why should I pay attention?" But to take that stance is to miss the finer point. To stop and analyze when a question like this arises is the mark of the curious, and those who do find uses beyond what they had hoped for. Hopefully a proof whose question seems straightforward enough, taken to a serious level of rigor, can show you that the most abstract of problems can be worth the mental effort.

I first happened upon such a proof in David Foster Wallace's Everything And More, in which he shows that two lines of different lengths can be made up of the same number of points. The most basic right triangle can be used for this proof. Picture the triangle sitting with its hypotenuse up. Pick any point on the hypotenuse and draw straight down to the bottom leg. For any point you could choose, there is exactly one point directly below it. Even though we can prove quite easily with the Pythagorean theorem that the hypotenuse is longer than the bottom leg, this one-to-one pairing shows that both are made up of the same number of points.

But how can that be? Ah, well I'm glad you're still reading. The answer is wild: both lines contain an infinite number of points! Just like the zoom in our fractal, you could zoom in on any tiny stretch of our line and still find more points inside it. The same idea can be expressed on the number line as well. In that form, we can say that there are an infinite number of numbers between 0 and 1, because we can keep adding decimal places. With each added decimal digit, our zoom on the number line increases (our precision, in more formal terms).
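A quick sketch of that "zoom" in Java (the class name is mine): between any two distinct numbers there is always a midpoint, so each step finds a brand-new number just by adding decimal places, and we never run out.

```java
import java.math.BigDecimal;

public class NumberLineZoom {
    public static void main(String[] args) {
        BigDecimal low = BigDecimal.ZERO;
        BigDecimal high = BigDecimal.ONE;
        BigDecimal two = BigDecimal.valueOf(2);

        // Keep halving the interval between 0 and 1: each midpoint
        // is a new number, represented with one more binary halving.
        for (int step = 0; step < 5; step++) {
            BigDecimal mid = low.add(high).divide(two);
            System.out.println(mid);  // 0.5, 0.25, 0.125, 0.0625, 0.03125
            high = mid;               // zoom into the lower half
        }
    }
}
```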

This same infinite-and-also-infinitely-divisible property is true of our fractal. Because the fractal exhibits recursion but doesn't have a clear beginning or end, I propose we look at it like the real number line. Rather than try to grasp where the beginning and end must be, we have to accept that there is neither. Instead, a snapshot of a fractal is simply a visible range from inside of a whole which is itself circular. You can't see the entire circle from the inside, but you can see a certain distance in both directions. And whichever direction you look, you could certainly look closer (more precisely) and see more!

So I will leave you with my final thought: I propose that fractals are doubly recursive, because they exhibit the self-similarity and stepwise patterns of recursive processes, but also exist as one big loop. Or as you might know it… it's turtles all the way down.

See you next time.

What Is Recursion?

Recursion is probably the most curious and entertaining concept I have learned in my 16+ years of academic experience. Formally, it falls under the computer science curriculum. I believe it belongs to a more general curriculum, and I hope that you will agree with me by the end of this post.

Recursion is everywhere. It is inherent in nature, used in technology, makes appearances in mathematics, and is even part of the human form to some extent. Recursion has the power to make a simple pattern or rule come alive with the most exciting complexities. By definition, it arises from repeating a process on its own output.

So what is it?


Recursion is the concept of a process which begins a new process like itself before it ends. Generally, when this happens the process that is spawned (called the child process) inherits some property of the spawning process (called the parent process). Perhaps an example will help. We can look at the Fibonacci sequence as a widely recognized example. It goes like this:

Example 1: The Fibonacci Sequence – Recursion in mathematics

1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89… and so on.


So where's the recursion in this? It's surprisingly simple. The first two values F(1) and F(2) are both set to 1 (some definitions start from F(0) = 0 instead, which just shifts the same sequence over by one). To compute any number in the series F(n), the two previous numbers in the sequence must be known.

To find F(n), the formula is: F(n) = F(n-2) + F(n-1)

So: F(3) = F(2) + F(1). This becomes: 2 = 1 + 1

Then: F(4) = F(3) + F(2) Becomes: 3 = 2 + 1

And so on…
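The formula above translates almost line-for-line into Java (the language used earlier in this series; the class and method names here are my own):

```java
public class Fib {
    // F(n) = F(n-2) + F(n-1): each call spawns two child calls,
    // which spawn their own children, until the base cases stop it.
    public static int fib(int n) {
        if (n == 1 || n == 2) {
            return 1;  // base cases: F(1) = F(2) = 1
        }
        return fib(n - 2) + fib(n - 1);
    }

    public static void main(String[] args) {
        for (int n = 1; n <= 11; n++) {
            System.out.print(fib(n) + " ");  // 1 1 2 3 5 8 13 21 34 55 89
        }
        System.out.println();
    }
}
```

(This naive version recomputes the same values many times – elegant, but exponential in n – which is exactly the parent/child spawning described above.)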


In this case, the property that is passed on from parent to child is the numbers themselves. This is a good example because it shows the most basic mathematical idea of recursion in action. What comes after is a reflection of what came before, with simple addition as each step in the process. But what about something more complex?

Example 2: The Golden Spiral and Plant Growth

Many have heard of the golden spiral. It looks like this:

Recursion in action!

The Golden Spiral

This example builds off the last one. So why is this spiral 'golden'? It exhibits the golden ratio. (And if you start with a square of side 1, the squares grow in the Fibonacci series too!) It's very similar to the Fibonacci sequence in that each new part grows in relation to the previous parts, but here the proportion between one step and the next is held constant at every step of the process. You can see it visually here, too: the sides of the rectangles display the proportion. Or you can see it in plants! (Really, many plants display recursion naturally!)

See the recursion?

Lots of plants display this type of recursion!
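A small sketch in Java tying the two examples together (the `ratio` method is my own name): the ratio between consecutive Fibonacci numbers settles toward the golden ratio, (1 + √5)/2 ≈ 1.618.

```java
public class GoldenRatio {
    // Ratio of consecutive Fibonacci numbers after n growth steps.
    public static double ratio(int n) {
        long prev = 1, curr = 1;
        for (int i = 0; i < n; i++) {
            long next = prev + curr;  // same rule as the spiral's squares
            prev = curr;
            curr = next;
        }
        return (double) curr / prev;
    }

    public static void main(String[] args) {
        double phi = (1 + Math.sqrt(5)) / 2;  // the golden ratio
        for (int n = 1; n <= 5; n++) {
            System.out.println(ratio(n));  // 2.0, 1.5, 1.666..., closing in
        }
        System.out.println("phi = " + phi);
    }
}
```

Each step uses only the previous two values, yet the constant proportion of the spiral emerges on its own.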

The Conclusion?

Recursion can be well formed mathematically, or shown visually or even found outside your window. But what is the deeper concept?

Recursion is process in action! And process is all around us. And with that, I hope you have come around to the beauty of this simple but incredible concept. In the world of programmers we call this type of beautiful simplicity ‘elegance’, and we are always looking for it.

Bonus stuff!

Now that you’ve read this all, you might get a kick out of these fun jokes from the ‘mathemusical’ vihart or Google itself.

….in the future I promise to link to vihart less. But you should check out the rest of her videos too…