Test Linux/Windows 11 performance: time cost of running uint32_t max() increments

2022/7/26 5:22:49

This article measures how long it takes to run uint32_t max() increments on Linux and Windows 11; it should be a useful reference for anyone benchmarking similar problems. Interested programmers, follow along.

Ubuntu

#include <chrono>
#include <cstdint>   // uint32_t
#include <cstdlib>   // atoi
#include <iostream>
#include <limits>    // numeric_limits
using namespace std;

void testTime(int x);

int main(int argc, char **argv)
{
    if (argc < 2)
    {
        cout << "Usage: " << argv[0] << " <rounds>" << endl;
        return 1;
    }
    int x = atoi(argv[1]);
    testTime(x);
}

void testTime(int x)
{
    cout << numeric_limits<uint32_t>::max() << endl;
    chrono::time_point<chrono::steady_clock> startTime;
    chrono::time_point<chrono::steady_clock> endTime;
    for (int i = 0; i < x; i++)
    {
        startTime = chrono::steady_clock::now();
        // Empty loop body: only the increment and comparison are timed.
        for (uint32_t j = 0; j < numeric_limits<uint32_t>::max(); j++)
        {
        }
        endTime = chrono::steady_clock::now();
        cout << i << ","
             << chrono::duration_cast<chrono::milliseconds>(endTime - startTime).count()
             << " milliseconds,"
             << chrono::duration_cast<chrono::nanoseconds>(endTime - startTime).count()
             << " nanos!" << endl;
    }
}

Compile

g++ -std=c++2a *.cpp -o h1

Run

./h1 10
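For the Windows 11 half of the comparison, the same source should build with MSVC from a Developer Command Prompt (a sketch; exact flags may vary by toolchain version, and the file name main.cpp is assumed):

```shell
cl /std:c++20 /EHsc main.cpp /Fe:h1.exe
h1.exe 10
```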

Snapshot

(screenshot of the console output omitted)

As the snapshot illustrates, running uint32_t max() (4294967295) increments in C++ on Ubuntu 20.04 takes approximately 2.2-2.3 seconds.


