RPi overclocking
@n3m351da

Overclocking is the process of increasing the clock frequency of a device beyond its rated frequency, hence the name. Now, that term is sometimes misused, in my opinion. Overclocking is also sometimes used to refer to increasing the internal voltage levels to increase speed. A better term for that would be overvolting. Overclocking proper is where you increase the clock frequency to speed up the performance of the chip.
So, one thing to note when we're talking about overclocking is that there's more than one clock inside one of these devices. Take a typical board, say a Raspberry Pi. It's got several clocks. There's a clock going into the chip, into the main processor, but there's also a system clock for the rest of the components on the board. By the way, the system clock rate is always slower than the clock rate of the chip, often by an order of magnitude, because communication from chip to chip is slower, so you have to slow down the clock rate. Also, even internally in the processor, inside the microcontroller or microprocessor, whichever it is, there are multiple clocks.
So, there's one high-speed clock coming in, but there are different what they call clock domains: different regions of the chip which can run at different clock frequencies. So, it's a little bit simplistic to just say, "I'm just going to up the clock." Which clock are you talking about? But right now we're assuming that the one high-speed clock that goes into the microprocessor is used to create all the rest of the clocks, so you can assume all the clocks on the chip go up together. So, what's the impact of overclocking? The first obvious impact is that instructions are executed more quickly. Roughly, there's one instruction per clock period, and I say roughly because this is vastly simplifying what goes on inside computer architecture.
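That one-instruction-per-clock approximation can be sketched as a quick back-of-the-envelope calculation. The numbers below are just illustrative (700 MHz is roughly the stock ARM clock of an early Raspberry Pi; 900 MHz stands in for a hypothetical overclock):

```python
# Rough model: one instruction per clock cycle.  This deliberately ignores
# pipelining, superscalar execution, memory stalls, and everything else a
# real processor does.
def instructions_per_second(clock_hz: float, instr_per_cycle: float = 1.0) -> float:
    """Approximate instruction throughput at a given clock frequency."""
    return clock_hz * instr_per_cycle

# Stock clock vs. a hypothetical overclock:
base = instructions_per_second(700e6)  # 700 MHz
oc = instructions_per_second(900e6)    # 900 MHz
print(f"base: {base:.2e} instr/s, overclocked: {oc:.2e} instr/s")
print(f"speedup: {oc / base:.2f}x")  # prints "speedup: 1.29x"
```

The point of the sketch is just that, under this simplified model, throughput scales linearly with the clock.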
I'm hiding tons of parallelism and pipelining and all sorts of details like that. But roughly, a component can only do one thing at one time. So, if you've got one adder, it's going to do one add instruction in a clock cycle. If you've got one multiplier, it's going to do one multiply instruction in a clock cycle, and so on. Each one happens per clock cycle. So, roughly, you're speeding up the rate at which instructions are executed. Now, there's a downside though, because you might say, "Wow, why don't I just crank it up forever?" There's a limit to this: signals have a shorter time in which to travel. Let me explain what I mean by that; this is my quick overview of synchronous circuit design. There are what are called flip-flops inside these devices.
A flip-flop is a storage element, and a register is built out of flip-flops. Each flip-flop stores one bit, a zero or a one; when we talk about D flip-flops, each one stores a single bit. Each of these flip-flops receives a clock, and remember that the clock is just a waveform that goes low then high, then low then high, a square wave over time at some frequency. What happens with these flip-flops is that they can load a new value every time the clock edge goes up, every time it goes from low to high. That sets the rate at which they load new values. For example, you might have an adder, and there's a set of flip-flops at the output of the adder that holds the result of the addition.
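That rising-edge behavior can be illustrated with a toy simulation. `DFlipFlop` is just a name I'm making up for this sketch; a real flip-flop is hardware, not a Python object, but the capture rule is the same:

```python
# Toy simulation of a D flip-flop: it captures its D input only on the
# rising edge of the clock (a 0 -> 1 transition); at all other times the
# stored bit Q holds its previous value.
class DFlipFlop:
    def __init__(self):
        self.q = 0          # stored bit (reset to 0)
        self._prev_clk = 0  # last clock level seen

    def tick(self, clk: int, d: int) -> int:
        if self._prev_clk == 0 and clk == 1:  # rising edge detected
            self.q = d
        self._prev_clk = clk
        return self.q

ff = DFlipFlop()
# D changes over time, but Q only updates when the clock edge goes up.
print(ff.tick(0, 1))  # 0 : clock low, no edge yet, Q holds reset value
print(ff.tick(1, 1))  # 1 : rising edge, Q loads D
print(ff.tick(0, 0))  # 1 : falling edge, Q holds
print(ff.tick(1, 0))  # 0 : rising edge, Q loads the new D
```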
The rate at which they load new values is the rate at which the addition operations are happening, so what's the fastest rate at which that can happen? Because if it loads one result every second, then there's one addition happening every second: one addition result captured every second. Here's the trick, though. Take an adder. The adder has some flip-flops at its input holding the numbers that you want to add, and it's got flip-flops at its output that hold the result of the addition. So, let's say the clock edge goes up and new numbers get loaded into the input registers of the adder. Then the adder starts performing the addition. The result has to be completed by the time the next clock edge goes up. The reason is that the register at the output of the adder, that set of flip-flops, has to have the right result there when the clock edge goes up and they load in the new values. If the adder doesn't finish in the time between the clock edges, then those result registers at the output of the adder are going to catch the wrong data, and you're going to have garbage. Basically, that's failure. When that happens, the whole machine fails. Weird things happen and you get the blue screen of death, if it's Windows, or whatever. Things fail. So, the delay across these components, like an adder, determines the fastest clock frequency you can have.
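The timing constraint described above can be written down directly: the clock period has to cover the time for the input register to present its data, plus the worst-case delay through the adder, plus the time the output register needs the data stable before the edge (the setup time). All the delay numbers below are made up for illustration:

```python
# The minimum clock period is the sum of the delays a signal sees on its
# way from one register, through the logic, to the next register.
t_clk_to_q = 0.5e-9   # seconds: input register presents its data
t_adder    = 2.0e-9   # seconds: worst-case delay through the adder logic
t_setup    = 0.3e-9   # seconds: data must be stable this long before the edge

t_min_period = t_clk_to_q + t_adder + t_setup
f_max = 1.0 / t_min_period
print(f"minimum clock period: {t_min_period * 1e9:.1f} ns")       # 2.8 ns
print(f"maximum clock frequency: {f_max / 1e6:.0f} MHz")          # 357 MHz
```

Push the clock above that `f_max` and the output registers start catching results before the adder has finished, which is exactly the failure described above.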
Because as you increase the clock frequency, you reduce the time between the clock edges. That gives you less time to do each computation, like an add. At some point you're going to hit a limit where errors start happening: where there isn't enough time, the result registers get the wrong data, and your system fails. The question is, where is that point? Typically, when chips are manufactured, the manufacturer tests them to see the fastest rate at which they work without failing. They test them at different clock frequencies; that's called speed binning, by the way. They test a chip at a high clock frequency and see if it fails. Usually it fails at first because of delays, so they test it at lower and lower frequencies until finally it works. Wherever that clock frequency is, they then cut off something like 10 to 15 percent of it to be safe, as a sort of boundary, and sell the chip at that rate. That boundary is a margin.
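A rough sketch of that margin calculation, with a hypothetical chip that last passed testing at 1 GHz (both numbers are made up for illustration):

```python
# Sketch of speed binning: the rated (sold) frequency is the highest
# frequency that passed testing, minus a safety margin (10-15 percent
# in the lecture's description).
def rated_frequency(max_passing_hz: float, margin: float = 0.10) -> float:
    """Frequency a chip is sold at, given its highest passing test frequency."""
    return max_passing_hz * (1.0 - margin)

f_pass = 1.0e9  # hypothetical: chip still worked at 1 GHz during testing
print(f"sold at {rated_frequency(f_pass) / 1e6:.0f} MHz with a 10% margin")        # 900 MHz
print(f"sold at {rated_frequency(f_pass, 0.15) / 1e6:.0f} MHz with a 15% margin")  # 850 MHz
```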
When you're overclocking, you're eating into that margin. You're saying, "Well, I'm going to add 10 more percent back in." That may work; it may not. Basically, this is all probabilistic. Maybe it works most of the time, but every once in a while you just crash, because there's a timing error: some result doesn't reach a register fast enough and, bam, everything crashes. So, that's what you're doing with overclocking. There's always this risk that everything is going to fail, and the more you overclock, the higher the risk. Another thing is that when you overclock, the temperature of the device increases. This happens because switching signals, going from low to high and high to low, uses power and makes heat, so the temperature of the device gets higher. That's generally going to shorten the lifetime of your component, because as it heats up it can only dissipate heat at a certain rate.
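As an aside, on a Raspberry Pi you can check the chip temperature yourself with the `vcgencmd measure_temp` command. Here's a small sketch that parses that command's output format; the sample reading is made up, and on a real Pi you'd get the string from the actual command instead:

```python
import re

def parse_vcgencmd_temp(output: str) -> float:
    """Parse the output of `vcgencmd measure_temp`, e.g. "temp=48.3'C"."""
    m = re.search(r"temp=([\d.]+)'C", output)
    if m is None:
        raise ValueError(f"unexpected vcgencmd output: {output!r}")
    return float(m.group(1))

# On an actual Pi you would capture the live output, e.g.:
#   import subprocess
#   out = subprocess.check_output(["vcgencmd", "measure_temp"], text=True)
sample = "temp=62.3'C\n"  # made-up reading from a busy, uncooled Pi
print(parse_vcgencmd_temp(sample))  # prints 62.3
```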
If you've ever looked inside a computer, you've seen the heat sinks they attach to these devices on the motherboard: big old pieces of metal that dissipate the heat. They can only dissipate heat at a certain rate, even assuming fan cooling, where you have a fan that blows over them. And by the way, the Raspberry Pi doesn't have that. It doesn't have a fan. It doesn't have a heat sink attached to the processor or any of that. So, it's not good at dissipating heat. In fact, my Raspberry Pi gets quite hot, I mean hot to the touch. You'll notice if you try doing something with it for a while and then touch the chip. When I do computationally intensive stuff with it, the chip is actually so hot you don't want to touch it. That damages it over time, and eventually it fails. Now, increasing voltage. Another thing you can do is overvolting; people often lump that in with overclocking, but it is a different thing.
So, with overvolting you increase the voltage swing. Instead of going from 0 to 3.3 volts, you'd go from 0 to 5 volts or something like that. Typically, the voltage swing on the chip's I/O is different than the voltage swing inside the processor; it's usually lower on the processor. But increasing the voltage swing can increase the transistor speed. If you increase the swing to a much higher voltage, then it takes less time to charge the transistor with current, to really simplify things. So, increasing the voltage swing can speed things up. But energy consumption is proportional to the square of the voltage. So when you increase the voltage, you're greatly increasing energy consumption.
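To rough order, dynamic switching power follows P = C × V² × f, where C is the switched capacitance. A quick sketch of the relative effect, with made-up scaling factors:

```python
# Dynamic switching power scales roughly as P = C * V^2 * f, so the
# relative change in power from scaling voltage and frequency is:
def relative_power(v_ratio: float, f_ratio: float) -> float:
    """Power relative to baseline when voltage and frequency scale by these ratios."""
    return (v_ratio ** 2) * f_ratio

# Hypothetical: raise the core voltage 20% and the clock 30%.
print(f"{relative_power(1.2, 1.3):.2f}x the baseline power")  # prints "1.87x ..."
```

Notice the squared term: the 20% voltage bump alone costs 44% more power before the frequency increase is even counted.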
Now, if you're plugged into the wall, maybe you don't care. But if you're running on a battery, then that's a problem. Also, thermal effects can alter timing. When I say thermal effects, remember that all this switching is increasing energy consumption, which is making more heat, and that heat can alter timing. Heat can change timing in funny ways that we're not going to get into in this class.