In the previous post we discussed how to measure the performance of a computer. Let me remind you which approaches we covered:
- Theoretical “on paper”: multiply clock frequency by the number of cores (and by the floating-point operations each core can do per cycle) to get peak FLOPS;
- Theoretical by benchmarking: using the Linpack utility. The idea is that the CPU performs dense matrix computations with a known exact number of floating-point operations; we measure how long the calculation takes and thus obtain FLOPS;
- Practical benchmarking: nobody (except perhaps scientists) uses a PC just to multiply matrices. Every PC has bottlenecks, and in day-to-day use we need to know how it performs as a whole, not only CPU- (or GPU-)wise. So we run benchmarks that exercise the whole system: CPU, GPU, RAM, storage, etc., to get the full picture.
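The first, “on paper” approach can be sketched in a few lines. The figures below (8 cores, 3.5 GHz, 16 FLOPs per cycle via FMA/SIMD) are purely illustrative, not taken from any specific machine:

```python
# "On paper" estimate: peak FLOPS = cores * clock * FLOPs-per-cycle.
# The flops_per_cycle figure depends on the SIMD width and FMA support
# of the particular CPU, so treat it as a per-architecture constant.

def peak_flops(cores: int, clock_hz: float, flops_per_cycle: int) -> float:
    """Theoretical peak floating-point throughput in FLOP/s."""
    return cores * clock_hz * flops_per_cycle

# Example: 8-core CPU at 3.5 GHz, 16 double-precision FLOPs/cycle per core
print(peak_flops(8, 3.5e9, 16))  # 448000000000.0, i.e. ~448 GFLOPS
```

Real machines rarely sustain more than a fraction of this peak, which is exactly why the benchmarking approaches exist.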
While thinking about how to measure the performance of some PCs newly connected to our network, we concluded that Megamind needs its own benchmarking module.
Well, anticipating your question, “why not just use something existing, or even ‘derive’ performance from the PC’s hashrate?”:
- The most important part of a mining rig is the GPU. For practical computing, however, everything should be more or less balanced: GPU, CPU, RAM, and storage. The minimal RAM that suffices for most mining daemons (4 GB) simply won’t work for applications such as neural-network training or 3D rendering;
- Some applications need A LOT of REALLY FAST storage. Seismic data processing, for example: the overhead of moving data between computation and storage must be minimal (computational servers can even keep special partitions in RAM, but that’s another story);
- Every Megamind user should understand how much computing it will take to run their task on 1 PC, N PCs, N·10³ PCs, etc. We needed a lingua franca for the whole system;
- From an economic/user-experience point of view: we want everybody to know what it would cost to run their task on our distributed network.
We can calculate (directly by benchmarking, or by indirect methods such as interpolation/extrapolation, ML forecasting, etc.) how much computing a task will take. This leads to a win-win situation: buyers and sellers know the cost beforehand, and the Megamind balancer can correctly forecast load across the PCs in the network.
And thus we designed 1 MINÐ as a unit of computational volume. For FPGA-based systems, 1 MINÐ is equivalent to 1 minute of operation of 100 million FPGA logic gates at 100 MHz.
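The FPGA definition above can be turned into a small worked example. Note the helper below, and its assumption that MINÐ scales linearly in gate count, clock, and time, is my own illustrative reading of the definition, not Megamind’s published formula:

```python
# Reference point from the definition:
# 1 MINÐ = 1 minute of 100 million logic gates running at 100 MHz.
REF_GATES = 100_000_000
REF_CLOCK_HZ = 100e6
REF_MINUTES = 1.0

def fpga_mind(gates: int, clock_hz: float, minutes: float) -> float:
    """Workload volume in MINÐ, assuming linear scaling in every factor."""
    return (gates / REF_GATES) * (clock_hz / REF_CLOCK_HZ) * (minutes / REF_MINUTES)

# Sanity check: the reference configuration itself is exactly 1 MINÐ.
print(fpga_mind(100_000_000, 100e6, 1.0))  # 1.0

# 200M gates at 200 MHz for half a minute: 2 * 2 * 0.5 = 2 MINÐ.
print(fpga_mind(200_000_000, 200e6, 0.5))  # 2.0
```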
For hardware based on CPU/GPU, their equivalent power is determined by benchmarking:
- We measure memory size, clock frequency, number of processor cores, disk-subsystem access speed, network transfer rate, etc.;
- After that we use a regression model to express the computing power of the system in units of MINÐ per second.
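The scoring step above can be sketched as follows. The feature set, coefficients, and intercept here are entirely hypothetical stand-ins; the real model is a regression fitted on Megamind’s own benchmark data:

```python
# Minimal sketch of scoring a benchmarked machine with a pre-fitted
# linear regression. All weights below are made-up placeholders.

COEFFS = {                # MINÐ/s contributed per unit of each feature
    "cores": 0.8,         # per physical core
    "clock_ghz": 1.5,     # per GHz of clock frequency
    "ram_gb": 0.05,       # per GB of RAM
    "disk_mb_s": 0.002,   # per MB/s of disk throughput
}
INTERCEPT = 0.1

def mind_per_second(features: dict) -> float:
    """Score a machine's measured features in MINÐ per second."""
    return INTERCEPT + sum(COEFFS[k] * features[k] for k in COEFFS)

pc = {"cores": 8, "clock_ghz": 3.5, "ram_gb": 32, "disk_mb_s": 500}
print(mind_per_second(pc))  # ~14.35 MINÐ/s with these placeholder weights
```

A linear model is the simplest choice; in practice the fit could be nonlinear, since (for instance) doubling RAM rarely doubles useful throughput.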
For algorithms (applications) placed on the platform’s marketplace, we determine how power-hungry they are also using MINÐ.
The app owners set the price of their app at N × MINÐ (where N is chosen freely by the owner).
The back-end clearing between clients, software owners, and hardware owners is processed in MINÐ units. For the sake of user-friendliness this accounting may be hidden from the interfaces; nevertheless, our system processes all transactions in MINÐ and then calculates final prices in USD, according to the formula:
USD price = MINÐ units * USD/MINÐ rate,
where USD/MINÐ is set based on the market supply-demand model.
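As a direct transcription of the formula (the sample rate below is illustrative only; the real USD/MINÐ rate floats with supply and demand):

```python
# USD price = MINÐ units * USD/MINÐ rate.

def usd_price(mind_units: float, usd_per_mind: float) -> float:
    return mind_units * usd_per_mind

# A 120-MINÐ task at an assumed rate of 0.25 USD per MINÐ:
print(usd_price(120.0, 0.25))  # 30.0
```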
Most importantly, this rate is indirectly tied to the cost of ETH mining (because you can perform something like a carry trade: buy computing power for USD, then use your MINÐ to mine ETH). At the end of the day this hedges everybody within the system against market shocks and the speculative intentions of third parties.
As you can see, MINÐ is not merely a unit of computing power but also an economic unit of exchange within the Megamind network.