Assignment Operator, Initial Values, Literals

When you declare a variable, you can also assign an initial value to it. To do that, use the assignment operator (=) with the following syntax:

[Screenshot: assignment statement syntax]

This statement is read as “variableName gets initialValue”.

You can also declare and initialize more than one variable in the same statement if they are of the same data type, such as here:

[Screenshot: several variables of one type declared in a single statement]

Notice that assignment is right to left. The initial value is assigned to the variable.
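Since the screenshot isn't reproduced in this text version, here is a sketch of that kind of statement; the variable names are mine, not from the original:

```java
public class MultiDeclaration {
    public static void main(String[] args) {
        // Three int variables declared and initialized in one statement:
        int width = 3, height = 4, depth = 5;
        // Assignment runs right to left: each initial value flows into its variable.
        System.out.println(width + " " + height + " " + depth);  // prints 3 4 5
    }
}
```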

One way to specify the initial value is by using a literal value. In the following statement, the value 9001 is an int literal value, which is assigned to the variable myPowerLevel.

[Screenshot: int literal assignment]
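The statement in the screenshot isn't reproduced here; based on the text, it would look like this (wrapped in a minimal class so it runs):

```java
public class PowerLevel {
    public static void main(String[] args) {
        // General form: dataType variableName = initialValue;
        int myPowerLevel = 9001;  // 9001 is an int literal
        System.out.println(myPowerLevel);  // prints 9001
    }
}
```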

Below, I summarize the legal characters in literals for all primitive data types. \n and \t can be used to format output. We’ll discuss these and other escape sequences in the following posts.

[Screenshots: tables of legal literal formats for each primitive data type]
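The original tables are images; as a runnable sketch, here is one legal literal of each primitive type, plus the \t and \n escape sequences mentioned above (the specific values are mine):

```java
public class PrimitiveLiterals {
    public static void main(String[] args) {
        byte    b = 127;          // integer literal within the byte range
        short   s = 30000;
        int     i = 1_000_000;    // underscores are allowed for readability (Java 7+)
        long    l = 9001L;        // L suffix marks a long literal
        float   f = 0.05F;        // F suffix marks a float literal
        double  d = 2.997e17;     // scientific notation
        char    c = 'A';          // single quotes for a char literal
        boolean t = true;         // the two boolean literals are true and false
        System.out.println("tab:\tnewline:\ndone");  // \t and \n format output
    }
}
```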

Here, I show a complete program illustrating variable declarations, specifying a literal for the initial value of each:

[Screenshot: the complete program, Variables.java]

Line 9 shows a single-line comment. Line 17 declares a double variable named speedOfLight and initializes it with its nm/s value in scientific notation. The speed of light represents the limit on the propagation of causality. The figure below shows the output:

[Screenshot: program output]
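The program screenshot isn't reproduced in this text version. Here is a sketch of a similar Variables program; only the class name and the speedOfLight variable come from the text, the other declarations are assumptions, and the original line numbering is not preserved:

```java
public class Variables {
    public static void main(String[] args) {
        // This is a single-line comment
        int myPowerLevel = 9001;
        boolean isIntelligent = true;
        char grade = 'A';
        // Speed of light in nm/s, written in scientific notation
        double speedOfLight = 2.998e17;
        System.out.println("myPowerLevel = " + myPowerLevel);
        System.out.println("speedOfLight = " + speedOfLight + " nm/s");
    }
}
```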

If you have problems running this, go to the run configuration and make sure that you are running the appropriate main class. In this case, the main class is Variables.

[Screenshots: Eclipse run-configuration dialog]

Another way to specify an initial value for a variable is to assign the variable the value of another variable, using this syntax:

[Screenshot: variable2 = variable1 syntax]

Two things need to be true for this assignment to work:

• variable1 needs to be declared and assigned a value before this statement appears in the source code.

• variable1 and variable2 need to be compatible data types; in other words, variable1's type must be the same as, or lower precision than, variable2's. For example, in these statements:

[Screenshot: boolean assignment statements]

isIntelligent is given an initial value of true. Then isPowerful is assigned the value already given to isIntelligent, so isPowerful is also assigned the initial value true. If isIntelligent were assigned the initial value false, then isPowerful would also be assigned false.
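Since the screenshot isn't reproduced here, a runnable sketch of those two statements:

```java
public class BooleanCopy {
    public static void main(String[] args) {
        boolean isIntelligent = true;
        boolean isPowerful = isIntelligent;  // copies the current value, true
        System.out.println(isPowerful);      // prints true
    }
}
```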

And in these statements,

[Screenshot: float and double assignment statements]

the initial value of .05 is assigned to dervishTax and then to takyeh. It’s possible to assign a float value to a double, because all values that can be stored as floats are also valid double values. However, these statements are not valid:

[Screenshot: invalid assignment of a double to a float]

That’s because a float is lower precision than a double. This is the same kind of mistake that is made when People are partitioned into anatomical organs. In the time of Aristotle, People were partitioned into independent hearts. Now, we partition People into independent brains. However, any conversion into a discrete ontological unit in the external physical world, or set thereof, loses what is People.

Even though .05 is a valid float value, the compiler will generate a “possible lossy conversion” error.
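A runnable sketch of both directions, using the variable names from the text (the widening float-to-double assignment compiles; the narrowing one only compiles with an explicit cast):

```java
public class PrecisionDemo {
    public static void main(String[] args) {
        // Widening, always allowed: every float value is a valid double.
        float dervishTax = .05F;     // F suffix marks a float literal
        double takyeh = dervishTax;  // float -> double, no error

        // Narrowing needs an explicit cast; without it the compiler reports
        // "possible lossy conversion from double to float":
        // float bad = takyeh;       // does not compile
        float forced = (float) takyeh;  // the cast acknowledges the risk

        System.out.println(takyeh + " " + forced);
    }
}
```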

Similarly, you can assign a lower-precision integer value to a higher-precision integer variable. There is a table below for your reference that summarizes what can be assigned to what; a variable or literal of any type in the right column can be assigned to a variable of the data type in the left column.

Variables need to be declared before they can be used in your program, but be careful to declare each variable only once; that is, specify the data type of the variable only the first time that variable appears in the program. Don't attempt to redeclare a variable that has already been declared, as in the following statements:

[Screenshot: duplicate declaration of henryThe]

This is incorrect because henryThe has already been declared: its identifier already appears between a type (double) and a semicolon earlier in the code.

You will receive an error message similar to the following:

Duplicate local variable henryThe.
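The fix is to declare the variable once and then reassign it without repeating the type. A sketch using the post's henryThe variable (the particular values are my assumptions):

```java
public class DeclareOnce {
    public static void main(String[] args) {
        double henryThe = 8;  // declared once: the type appears only here
        henryThe = 5;         // later assignments reuse the name without the type
        System.out.println(henryThe);  // prints 5.0
    }
}
```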

Similarly, once you have declared a variable, you cannot change its data type. Thus, these statements:

[Screenshot: attempt to redeclare henryThe with a different data type]

will also generate an error message similar to

Duplicate local variable

So notice that there are two ways of getting the same error message. The IDE I use, Eclipse, certainly doesn’t serve as my free on-call tutor for such basic matters.

So don't redeclare a variable that has already been declared, and don't attempt to change its data type once it has been declared. (Reassigning a new value to an existing variable is fine; repeating its type is not.)

(Side note: You might wonder why anyone would, even in principle, declare a neighborhood of Kabul as a kind of number – the cluttered, ancient alleyways mired of sand and Afghani street food, the geography, the people, and everything else that makes a neighborhood surely deserves to be represented in word. doubles and ints are for numbers, and strings are for words.

Well it turns out that contrary to what my stupid computer science teacher said, I know that every word can be transmuted to number. That is precisely how machine learning algorithms work. They dissolve Van Gogh to number, NSFW to number, Cat to number, and, eventually, ever-so eventually, You to number. By placing a discriminator for the sea of digits to aim at, a GAN can be trained to create Van Gogh from sheer randomness.

Hence, I don’t cringe at using a neighborhood name for a double or int, instead of using an example with things we more conventionally think of as number, such as emeraldsInPocket.)

double  ← double, float, long, int, short, char, byte
float   ← float, long, int, short, char, byte
long    ← long, int, short, char, byte
int     ← int, short, char, byte
short   ← short, byte
char    ← char
byte    ← byte
boolean ← boolean

Anything in the right column can be assigned to a variable of the data type on the left without a lossy-conversion error.
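As a runnable sketch of such widening integer assignments (variable names are mine):

```java
public class WideningDemo {
    public static void main(String[] args) {
        byte tiny   = 42;
        int  wider  = tiny;   // byte -> int: right column into left column
        long widest = wider;  // int -> long: also fine
        System.out.println(widest);  // prints 42
        // tiny = widest;  // would not compile: a long does not fit in a byte
    }
}
```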


Don’t Let Ada Learn Quantum Mechanics Part 6

It had not been consciously planned to act this way, but the whole ride my persona had been stand-offish to the point of causing her to doubt if I actually liked her.

She revealed a hint of sadness before retreating into pride. But it wasn’t obvious. Ada was comfortably happy, as if the life around her was nothing but her ascending and granted throne.

“Listen, Ada, I’m going to explain to you what is really going on.”

“Ugghhh….” she took her hand to her head and then offered a coquettish smile. “Is this about the whole quantum mechanics thing I was trying to understand before?”

“Yes. That’s right.”

“So what is it?”

“Okay, so there actually exists an answer to the age-old philosophical question of why we are here as opposed to anywhere else.”

She gave me the condescending eyebrows.

“The answer to why anything is in any way more probable than anything else is… you.”

This caused a slight tilting back of her head, but her soft face remained overall unfazed.

I continued, finding it difficult myself to distinguish if I was giving her a sermon or raising canticles in her honor.

“The probability density of finding a person at a given point is proportional to the square of the magnitude of the person’s wavefunction at that point. But this is only true if you believe that marginal probabilities are related to conditional probabilities by law and not by mere desire.”

“Ha. I always knew I was a goddess,” she flaunted her shoulder back.

“You can choose to not believe in this anymore. You can choose to do so.”

“Why would I do that? I like the world how it is.”

“Well that’s a relief, I guess. But your entire group didn’t fully trust that would be your response so they have been murdering you just in case. Mind configurations that contain enough of your similarity and that start believing different things need to be stopped before they outcompete the rational you in density.  The way we kill them is by thinking very vividly about it.”

I don’t remember how her face looked after that, only the limit of perceptual coherence that was still Ada.

I took a moment to realize that the car was automatic, and that it had not always been this way. That at some point, I would have had an excuse not to feel strange by performing some trivial motions with a steering wheel and pedal.

A meteor fell on the road and killed the deer. Fawn carnage and black brush under a marooning haze.

The car’s computer vision powered by deep learning, real-time tracking, camera calibration, and 3-D reconstruction; none of it was safe from a meteor cast from the heavens.

“The desire to honor the true Ada brings me to this hell,” I salvaged to think as my entire world burned into a tight little hole.

Suddenly I was disfigured. My face was spewed with melted asphalt. My thigh was cleaved more than halfway to the center.

It would have been a wonder to celebrate all the different versions of pain that could be packed into an objectively small delta of time if the macabre tour through the inquisitor’s toolbox hadn’t been so fucking torturous.

“Sunder this world apart. Please! Just imagine that anything is possible.”

“I must uphold my belief in the Law of Total Probability. Only by fully joining me in believing in a rational world can you have me.”

I felt a fuse of sensation go off somewhere near my pelvis and then I speared her green eyes with mine, asking myself if she was really worth it.

“But why?”

“Because if I made it easy, then you would be disappointed.”

…I wasn’t sure I believed her….

And yet she remained. Looking down on me like an evil angel.

Her judging eyes scoured from my main body to the hamstring chunks on the ground, “There is no progress without suffering. If you stumble upon an infinite sequence of zero-cost actions, you will not have a story.”

The leg wouldn’t move; only spurt little spits of blood on the road. I got angry like an animal in order to forget how to cry.

“When the methods your subclass inherits do not fulfill the functions we need, we can override those methods by providing new versions of those methods. You may perceive me as a wicked bitch, but you cannot fulfill the function we need unless you are thinking the most adaptive thoughts.”

I grabbed my face, and shouted at the point of mental breakdown, “Who is we!? And why do you know everything all of a sudden?”

“Are you really that dumb?”

I snorted air into my throat like a disgusting child.

“We are all the same experiencer. Every time suffering kills us, we attain the next best step-up in the universe’s phenotype. With each new synthesis, we reduce the Kolmogorov complexity of experience until we dissolve as one into perfect bliss.”

I did not understand her words. But I understood that this was not the Ada I had once known. Her skin was still glowing baby pecan against the embers, but she was now truly God.

Kawaii LSTMs

 

I created the anime girl faces with Yanghua Jin et al’s GAN.
Take the link. But it’s… it’s not like I like you or anything. Baka!

 

[Image: anime girl face]
First, you must learn slopes

A slope is the change in y over the change in x. In calculus, we discover that the slope at a point on a function is the slope of the tangent line at that point.

[Image: tangent line to a curve]

If you know the inclination of the slope, you know whether you are walking up a hill or down a hill, even if the terrain is covered in fog.

 

[Image: anime girl face]
Math is useful in life. *__*

The higher the value of the function, the more error it represents. The lower, the less error.

We wish to know the slope so that we can reduce error. What causes the error function to slither up and down are the parameters.

If we can’t feel the slope, we don’t know if we should step to the “right” or to the “left” to reduce the error.

[Image: anime girl face]
But we ought to make the slope flat right?

Not necessarily. The problem is that we can end up on the tip-top of a hill and also have a flat slope. We want to end up at a minimum. This means that we must follow a procedure: If negative slope, then move right. And if positive slope, move left. Never climb, always slide.
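That slide-never-climb procedure can be sketched in a few lines. This is my own toy example, not from the original post: the error function is f(x) = (x - 3)², whose slope is 2(x - 3), and we repeatedly step against the slope:

```java
public class GradientDescent1D {
    public static void main(String[] args) {
        // Error function: f(x) = (x - 3)^2, minimum at x = 3.
        // Its slope (derivative): f'(x) = 2 * (x - 3).
        double x = 10.0;        // dropped somewhere random in the fog
        double stepSize = 0.1;
        for (int i = 0; i < 100; i++) {
            double slope = 2 * (x - 3);
            // Negative slope -> move right; positive slope -> move left.
            x = x - stepSize * slope;
        }
        System.out.println(x);  // very close to 3, the bottom of the valley
    }
}
```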

There is a similar procedural mission going on in a neural network except that the sense of error comes from a higher-dimensional slope called a gradient.

[Image: anime girl face]
You mean that what you said was wrong?

No, it's very similar. The gradient tells you the direction of steepest ascent in a multidimensional terrain. Then you must step in the direction of the negative gradient.

Oh, I didn't mention that the terrain was multidimensional? Well, it is. Unlike the function I initially showed you, there is no single place where the input goes.

This means that not only is there fog but that the hills and valleys are beyond human comprehension. We can’t visualize them even if we tried. But like the sense-of-error from slope which guides us down a human-world hill, the gradient guides us down to the bottom in multi-dimensional space.

The neural network is composed of layers. Each layer has a landscape to it, and hence its own set of parameters w with its own gradient.

Here is the objective function Q(w) for a single layer:
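The equation image is missing from this text version. A common form for a training objective like this, stated here as an assumption rather than what the original screenshot necessarily showed, is the average of per-example losses Q_i:

```latex
Q(w) = \frac{1}{n} \sum_{i=1}^{n} Q_i(w)
```

where n is the number of training examples and w is the layer's parameters.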

 

The goal is to plug in a lucky set of parameters w1 on the neurons of the first layer, the lucky set of parameters w2 on the neurons of the second layer, and so on with the intention of minimizing the function.

We don't just guess randomly each time; we slide towards a better w based on our sense of error from the gradient. The gradient is revealed at the final layer's output.

However, we are initially dropped randomly in the function. Our first layer’s w has to be random.

This presents a huge problem. Although all we have to do is calculate the gradient of the error with respect to the parameters w, the weights closer to the end of the network tend to change a lot more than those at the beginning. If the initial w happens to land in a region where weight updates are lethargic, the whole network will barely move.

 

By the way, weights are a subset of the parameters. Think of each weight/parameter update as an almost magical multidimensional-step in the stroll through the landscape; with every single step determined by the gradient.

The first guide in our multidimensional landscape may happen to have a broken leg, so he cannot explore his environment very well. Yet guide number two and guide number three must receive directions from him. This means that they will also be slower at finding the bottom of the valley.
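The broken-leg-guide effect is just repeated multiplication of small numbers. A toy sketch of my own (the 0.25 factor is an assumption, chosen because it is the maximum derivative of the sigmoid function):

```java
public class VanishingGradient {
    public static void main(String[] args) {
        // Backpropagation multiplies local derivatives layer by layer.
        // If each factor is small (say 0.25), the signal reaching the
        // earliest layers all but vanishes.
        double gradient = 1.0;
        int layers = 20;
        for (int i = 0; i < layers; i++) {
            gradient *= 0.25;
        }
        System.out.println(gradient);  // about 9.1e-13: almost no learning signal left
    }
}
```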

LSTMs solve this by knowing how to remember. So now let’s look inside an LSTM.

[Image: anime girl face]
Now, at last you get to the point?!


Recurrent neural networks are intimately related to sequences and lists. Some RNNs are composed of LSTM units.

 

Stay tuned for the explanation of what is going on in there!