I Am Not A Mage Lord

Chapter 320: AI Chip


It had to be said that the "AI" chip was something of a scourge. Even with the gods continuing to sing "The Word of Creation", granting Lynch ample, immense creativity and keeping him immersed in inspired illusion at all times, the AI chip was still struggling to be born inside the memory palace.

A work of mystery at this pinnacle, once born, could affect the very existence of the entire magical civilization. The conception in Lynch's mind was, after all, only a faint outline of it.

In the eyes of outsiders, what Lynch was doing was like handing a deck of playing cards, arranged from A to K and sorted by suit, to more than a dozen passersby to cut and shuffle, then taking it back, shuffling it randomly for another ten-odd seconds, and restoring the entire deck to its original order.

Or like handing a scrambled Rubik's Cube to a complete beginner, only for them to solve it on the spot.

These things are not impossible; the probability of them happening is just far too small.

The same was true of Lynch's current "AI chip".

It was like kneading a masterpiece of perfect order out of countless random shuffles, the kind of feat only nature's own miraculous work deserves to bear the name of.

Soon.

As the overall chip architecture took shape, Lynch began to sink into an inexplicable shock!

An artificial intelligence chip like the one Google had developed behind AlphaGo?

It made Lynch think of His Royal Highness Quinn's trial in the underground of the Knowledge Library, where a Go intelligence had been used to overwhelm the opponent and win the prophetic thread.

The logic of the past seemed to converge again at this moment.

TPU?

This custom ASIC that Google unveiled in 2017, developed specifically for machine learning, took only a year before being moved onto the cloud for commercial use, following in the footsteps of the CPU and the GPU.

TPU.

Its full name: Tensor Processing Unit.

For most people, the first encounter with the word "tensor" probably came from reading popular science books such as A Brief History of Time.

A tensor, in its mathematical origin, is an object that maps geometric vectors, scalars, and other tensors onto a resulting tensor in a multilinear way.

Lynch hadn't understood it the first time, either.

Now, however, he saw it and understood: a so-called tensor is simply a generalized matrix.

The vectors learned in high school are one-dimensional matrices, a cube of numbers is a three-dimensional matrix, and even a lone number can be treated as a matrix.

This already lines up with neural-network algorithms, and what lifts a tensor beyond a plain matrix is its dynamic character: it lives within a structure and interacts with the other mathematical entities around it.

In computer science, a tensor is an n-dimensional matrix.
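In modern programming terms, the idea can be sketched in a few lines (an illustrative example only, assuming Python with NumPy; nothing here comes from Lynch's notes):

```python
import numpy as np

# A tensor is simply an n-dimensional array of numbers.
scalar = np.array(3.0)                    # rank 0: a lone number
vector = np.array([1.0, 2.0, 3.0])        # rank 1: the "high-school vector"
matrix = np.array([[1.0, 2.0],
                   [3.0, 4.0]])           # rank 2: an ordinary matrix
cube = np.zeros((2, 2, 2))                # rank 3: a cube of numbers

for name, t in [("scalar", scalar), ("vector", vector),
                ("matrix", matrix), ("cube", cube)]:
    print(name, "rank", t.ndim, "shape", t.shape)
```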

Lynch silently copied the pattern onto paper again. He had just laid out the entire body of mystery-control knowledge and traded it to the gods.

As for whether the other side would use it to find a specialization path to godhood, he didn't much care.

With the fire already singeing his brows, who could afford to worry about whether tomorrow's meal would be hot?

As he wrote, the lines Lynch put down on the blackboard grew more and more unrestrained.

A trained neural network classifies data into labels or estimated values; that is inference.

So every neuron has to be computed.

The input data is multiplied by its weight to represent the signal strength.

The products are summed to aggregate the neuron's state.

An activation function then adjusts the neuron's activity.

Step after step like this, without interruption.

It stands to reason that even a single-layer network with only two neurons and three inputs already needs six weight-input multiplications...

Scaled up, the multiplications and memory fetches inside these matrices both eat up a great many CPU cycles and a great deal of memory, and chips like the TPU were born precisely to lighten that load.
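Those steps amount to only a handful of lines of code (a rough sketch, again in Python with NumPy; the inputs and weights are made-up numbers, and the activation is assumed to be a ReLU):

```python
import numpy as np

# Single-layer network: 3 inputs feeding 2 neurons.
x = np.array([0.5, -1.0, 2.0])            # example inputs (made up)
W = np.array([[0.1, 0.4, -0.2],           # weights of neuron 1
              [0.7, -0.3, 0.5]])          # weights of neuron 2

# Step 1: multiply inputs by weights -- 2 x 3 = 6 multiplications.
# Step 2: sum the products to aggregate each neuron's state.
z = W @ x
# Step 3: an activation function adjusts each neuron's activity.
y = np.maximum(z, 0.0)
print("pre-activation:", z, "output:", y)
```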

Lynch couldn't help frowning and glancing around.

In a sense, the computational load was much like the load on a power grid: the maximum load set the overall peak, and it would also set the peak he himself could reach once the "AI chip" was finished.

Supply and demand had to stay in balance; otherwise, the first thing to collapse would be himself.

But he was soon drawn back to the structure of the TPU and became absorbed in it.

Only by going deep into a project can you truly experience its fun.

So understanding is the first step.

The same holds for board games: chess, which is easy to pick up, draws a larger audience than Go, and Gomoku is more popular still than chess.

The more Lynch looked, the more he couldn't help but marvel at it.

The architecture of this TPU actually relied on quantization: approximating any value between a preset maximum and minimum with an eight-bit integer. The TPU packed in a full 65,536 eight-bit integer multipliers, directly compressing 32-bit or 16-bit calculations down to 8 bits.

In effect, a continuous curve was discretized.

Neatly cutting the cost of neural-network prediction.
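A toy version of that quantization looks something like this (a sketch under simplifying assumptions; the value range, scale, and example weights are invented, and real TPU quantization has more machinery behind it):

```python
import numpy as np

def quantize_uint8(x, x_min, x_max):
    """Approximate values in [x_min, x_max] with 8-bit integers 0..255."""
    scale = (x_max - x_min) / 255.0
    return np.round((x - x_min) / scale).astype(np.uint8), scale

def dequantize(q, scale, x_min):
    """Recover an approximation of the original values."""
    return q.astype(np.float32) * scale + x_min

weights = np.array([-0.8, -0.1, 0.3, 0.75], dtype=np.float32)
q, scale = quantize_uint8(weights, x_min=-1.0, x_max=1.0)
print(q)                            # the 8-bit codes
print(dequantize(q, scale, -1.0))   # close to the 32-bit originals
```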

The second point is more critical.

Just like the hardware that Lynch originally admired.

The TPU chip directly builds in the various computational tools a neural network needs.

A matrix multiply unit, a unified buffer, an activation unit, and so on, driven by a dozen or so high-level instructions that together complete the mathematics required for neural-network inference.

At the same time, it relies on a typical RISC processor to supply the instructions for simple calculations.
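The division of labor among those units can be caricatured like this (purely illustrative Python; the class and method names are hypothetical and do not correspond to any real TPU interface):

```python
import numpy as np

class ToyTPU:
    """Caricature of the pipeline described above, not real hardware."""

    def __init__(self):
        self.unified_buffer = {}            # stand-in for on-chip storage

    def matrix_multiply_unit(self, w, x):
        return w @ x                        # the bulk of inference math

    def activation_unit(self, z):
        return np.maximum(z, 0.0)           # e.g. a ReLU activation

    def infer(self, x, w):
        self.unified_buffer["input"] = x
        z = self.matrix_multiply_unit(w, self.unified_buffer["input"])
        return self.activation_unit(z)

tpu = ToyTPU()
print(tpu.infer(np.array([1.0, 2.0, 3.0]),
                np.array([[0.2, -0.1, 0.4],
                          [0.5, 0.3, -0.2]])))
```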

