I understand why CPython needs the GIL: to prevent race conditions when counting references, which could cause memory leaks.
What I don't understand is:
How do other high-level languages (like, say, Java) avoid this problem? Is it because Java uses garbage collection? Could Python discard the GIL by using garbage collection? Would that slow single-threaded performance? Why?
Say you have a single-process, multithreaded program. My understanding is that within this program, two threads cannot simultaneously execute instructions. In Python, I assume that rather than switching at the instruction level, threads must switch at the 'bytecode' level because of the GIL. Why is this a bad thing?
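As an aside, here is a minimal sketch (not from the post) of what this means for CPU-bound work in CPython: the same pure-Python loop run on a thread pool typically takes about as long as running it serially, while a process pool sidesteps the GIL. The workload and worker counts below are illustrative.

# Minimal sketch: CPU-bound work on threads vs. processes in CPython.
# With the GIL, the threaded version usually takes about as long as
# running the jobs one after another; the process pool scales with cores.
import time
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

def count_down(n):
    # Pure-Python, CPU-bound loop: holds the GIL for its whole duration.
    while n > 0:
        n -= 1
    return n

def timed(executor_cls, jobs=4, n=5_000_000):
    start = time.perf_counter()
    with executor_cls(max_workers=jobs) as pool:
        list(pool.map(count_down, [n] * jobs))
    return time.perf_counter() - start

if __name__ == "__main__":
    print("threads:   %.2fs" % timed(ThreadPoolExecutor))
    print("processes: %.2fs" % timed(ProcessPoolExecutor))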
You have seen the news.
Python is becoming the world's most popular programming language:
- Stack Overflow Developers Survey 2018
- Stack Overflow Most Popular Languages 2017
However, Python still lacks important features for concurrency and functional programming.
For concurrency, the Global Interpreter Lock imposes heavy limits on the use of Python for CPU-bound multithreading.
For functional programming, even if Python is more functional than Java, it is still quite clear that it is not a language that treats a functional-first approach well - and that is by the will of its creator.
I don't think Python can keep growing at this rate while it goes against these two very important trends, so I would like to know your opinion on this subject:
- Will future versions implement features that make Python more efficient for concurrency and more functional-friendly?
- Will a new Pythonic language emerge that implements these features (just as Scala is a concurrency- and functional-friendly Java)?
Brand new build. B660M Aorus Pro AX DDR4, 2x8GB 3600MHz Corsair Vengeance LPX C18.
Single thread performance is a tad low (high 600s instead of around 700) but within single digit percentage points. Multicore is abysmal. I'm seeing scores just shy of 1600 when the expected result should be just shy of 5000.
I've run memtest86 with no errors for 6 passes at XMP settings. I'm going to update the BIOS, but from what I've read, a BIOS update on this motherboard should only have a dramatic impact on the non-F SKU of this CPU.
CPU usage was allegedly 100% on all cores during the all-core portion of the benchmark, and temps were in the high 50s, but power consumption never went above about 45W or so.
Any ideas what's going on?
... there is no denying it.
Stop suggesting people upgrade their CPUs when they state that the game was running perfectly before the last update. I myself have been playing just fine on my i5-7600k since late 2017.
Yes, that CPU is indeed outdated.
Yes, I would get better performances if I bought an i13 17999KXW.
But that didn't keep me from playing at a perfectly decent frame rate (50-80 fps most of the time) for close to 5 years.
HOWEVER, the last patch completely MURDERED Tarkov's CPU usage. Before, the game used around 60-70% of the CPU resources, with the occasional spike up. Now it is perma-capped at 100%, killing all responsiveness in the rest of my software: music freezes, Discord mates stop hearing me... Hell, even in-game inputs get frozen sometimes (e.g. I let go of the W key but the character keeps running for 1-2 secs). Those things NEVER happened before, and the strangest part is that the FPS is still perfectly fine, I'd even say better than before.
I know I am not the only one.
Last patch brought with it an issue, and I hope the devs are working on a fix. I am not asking for troubleshooting help, I tried everything. It would be great to get an acknowledgement of the issue from the devs, though.
EDIT: someone pointed out that I might be due for repasting my CPU/GPU. They were right, I was even long overdue. However, this did not change anything except better temps. My CPU is still capped at 100% all the time.
Hi, I (20M, Junior CS) need a mentor for a pretty advanced but not too long assignment I have. A lot of people are struggling, and I really want to figure out and learn how to combine pipes, multiple threads, and a bounded buffer in this producer-consumer assignment. Please DM me if you are able to help in any way, thanks :)
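As a rough illustration only - the assignment's actual interface isn't given, so every name and size below is made up - here is one way pipes, threads, and a bounded buffer commonly fit together in Python: a producer thread reads lines from a pipe into a bounded queue, and consumer threads drain it.

# Rough sketch, not the assignment's spec: one writer feeds a pipe, a
# producer thread reads the pipe and fills a bounded buffer, and consumer
# threads drain it. All names and sizes are illustrative.
import os
import threading
import queue

BUFFER_SIZE = 8          # bounded buffer capacity
NUM_CONSUMERS = 3
SENTINEL = None          # signals consumers to stop

def producer(read_fd, buf):
    # Read lines from the pipe and push them into the bounded buffer.
    with os.fdopen(read_fd) as pipe_in:
        for line in pipe_in:
            buf.put(line.strip())      # blocks when the buffer is full
    for _ in range(NUM_CONSUMERS):
        buf.put(SENTINEL)

def consumer(buf, ident):
    while True:
        item = buf.get()               # blocks when the buffer is empty
        if item is SENTINEL:
            break
        print(f"consumer {ident} got: {item}")

if __name__ == "__main__":
    read_fd, write_fd = os.pipe()
    buf = queue.Queue(maxsize=BUFFER_SIZE)

    threads = [threading.Thread(target=producer, args=(read_fd, buf))]
    threads += [threading.Thread(target=consumer, args=(buf, i))
                for i in range(NUM_CONSUMERS)]
    for t in threads:
        t.start()

    # Feed some data into the pipe, then close the write end so the
    # producer sees EOF.
    with os.fdopen(write_fd, "w") as pipe_out:
        for i in range(20):
            pipe_out.write(f"item-{i}\n")

    for t in threads:
        t.join()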
I always knew EFT was a CPU-bound game... but I never knew just how much. I run the game at 1440p normally but had to RMA my card recently. The card I am stuck with is a GTX 680... a 2GB card from 10 years ago that does not even get driver updates (outside of security patches). I am able to get 70-80 FPS on Interchange and 110 on Factory at 1080p on this card... I have a 6700K clocked at 4.5 GHz. I did not even consider the possibility of running this game on this card, considering Valorant crashed every 60 seconds of playing. Of course my details are set to arse mode and I may have enabled downsampling. Crazy stuff, and I am so happy that I won't be without the game.
Hi! Just installed a new i5 12400 on a Gigabyte B660M mobo. Fresh Windows 10 install. The CPU-Z single thread score is 665, which is around what other reports suggest. The multithreaded score is only 2412, while HWMonitor shows 100% load on all threads but only around 43W consumption. Is there anything I've forgotten to do? CPU-Z https://valid.x86.fr/x1lq6w HWMonitor during stress test https://imgur.com/a/UXhTRZz Thanks!
I'm running on an i7 8th gen.
So near the endgame, you start getting Zerg rushed and make builds that spam the screen up with lots of stuff. This is when the game slows to a crawl.
Is this slowdown CPU or GPU bound? Would throwing more horsepower at it (higher end CPU/GPU) or reducing visual effects help this? Is this the game engine reaching its limits? Would another game engine that's designed to handle this much stuff on the screen help at all?
The slowdown can make things easier but it also kinda makes the endgame scenarios less intense.
Games are very latency-sensitive; any increase in latency has a significant effect on frame rate.
I've experienced this myself when trying to run compile jobs in the background while gaming. Even when they're at the lowest scheduling priority (SCHED_IDLE), which should hand >99% of the CPU resources to other tasks, they still cause my game to lose ~30% of its average frame rate (not to speak of stability): https://unix.stackexchange.com/questions/684152/how-can-i-make-my-games-fps-be-virtually-unaffected-by-a-low-priority-backgroun
This is likely a bufferbloat-style queueing effect: larger queues -> higher latencies.
RT kernels are supposed to offer more consistent latency at the cost of throughput, which should be a desirable trade-off when gaming. 160 fps is nice, high-throughput stuff, but I'd rather have a more consistent 140 fps.
Could they help this case or perhaps even generally be useful?
Has anybody done benchmarks on this? The newest I could find is a Phoronix benchmark from 2012 testing Ubuntu's low-latency kernel, which isn't very applicable today, I'd say.
How do you even use an RT kernel? Would I have to give my game a specific priority?
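For reference, here is a minimal Linux-only sketch (not from the post) of how scheduling policies can be assigned from Python via os.sched_setscheduler. It only shows the mechanism, not whether an RT kernel actually helps; raising a process to SCHED_FIFO or SCHED_RR normally requires root or CAP_SYS_NICE, and the PIDs below are placeholders.

# Linux-only sketch of assigning scheduling policies from Python.
import os

def set_idle(pid):
    # Demote a background job (e.g. a compile) to SCHED_IDLE.
    # pid 0 means "the calling process".
    os.sched_setscheduler(pid, os.SCHED_IDLE, os.sched_param(0))

def set_realtime(pid, priority=10):
    # Promote a latency-sensitive process (e.g. a game) to SCHED_FIFO.
    # Needs root/CAP_SYS_NICE; keep the priority modest so it cannot
    # starve everything else.
    os.sched_setscheduler(pid, os.SCHED_FIFO, os.sched_param(priority))

if __name__ == "__main__":
    # Harmless demo: demote this very process. For a real build job or
    # game you would pass its PID instead.
    set_idle(0)
    print(os.sched_getscheduler(0))  # prints the numeric SCHED_IDLE value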
I'm building a little client-side Blazor app for the game Elden Ring. The work that it needs to do takes about 10 seconds and is locking up the UI, and also making the "This page is slowing down Firefox. To speed up your browser, stop this page." message pop up.
I've given BlazorWorker a try in order to get this running in a separate thread/in the background, but it didn't seem to help here. Do I have any options other than to convert this to an ASP.NET hosted app and do the heavy lifting on the server?
I'm coming back after a break and don't think I realized how bad frame drops are in high pop areas. My CPU typically tops out around 78% to 82% during chest runs while GPU is around 40% to 50%. I would think my bottleneck is CPU but I've never seen it at 90+%.
Specs:
Edit: Forgot to add, I have a mix of medium and low settings and am playing at 1440p.
I've previously built a toy ray tracer in Java, which I'm porting to C++ and now trying to multi-thread. I can't work out why I'm not seeing the performance increase I saw in my Java implementation (in C++, using multiple threads actually decreases performance). Could this be due to memory bandwidth when accessing the pixel array? I've also heard about false sharing, but I'm unsure whether this could be a problem here, and how to combat it if so. Any tips, pointers, or subjects to read about would be appreciated!
Here is a representation of how I'm processing the image, which shows the same performance problem when using threads. I'm using rand() as a stand-in to avoid posting a ton of ray-tracing code.
Cheers!
#include <cstdlib>
#include <thread>
#include <vector>
// (plus whatever OpenGL header provides glTexImage2D in your setup)

const int image_width = 900;
const int image_height = 500;

void renderImage(int start_x, int end_x, int start_y, int end_y,
                 unsigned char* pixels);

int main() {
    // 4 CHANNELS, R G B A (heap allocation; ~1.8 MB is too large for the stack)
    std::vector<unsigned char> image_data(4 * image_width * image_height);
    while (true) {
        std::vector<std::thread> threads;
        int num_of_threads = 8;
        // Each thread handles a vertical strip of pixels:
        // (image width / num of threads) wide by the full image height.
        for (int i = 0; i < num_of_threads; i++) {
            int start_x = (image_width / num_of_threads) * i;
            int end_x = start_x + (image_width / num_of_threads);
            threads.emplace_back(renderImage, start_x, end_x, 0, image_height,
                                 image_data.data());
        }
        for (auto& thread : threads) {
            thread.join();
        }
        // Load image into texture to be displayed (assumes a GL context and
        // bound texture already exist).
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, image_width, image_height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, image_data.data());
    }
}

void renderImage(int start_x, int end_x, int start_y, int end_y,
                 unsigned char* pixels) {
    for (int j = start_y; j < end_y; j++) {
        for (int i = start_x; i < end_x; i++) {
            int index = 4 * (j * image_width + i);
            // Set random values for each channel. Note: rand() typically
            // guards shared global state, so heavy use from many threads
            // can serialize them.
            pixels[index]     = (int)(((double)rand() / RAND_MAX) * 255);
            pixels[index + 1] = (int)(((double)rand() / RAND_MAX) * 255);
            pixels[index + 2] = (int)(((double)rand() / RAND_MAX) * 255);
            pixels[index + 3] = 255;
        }
    }
}
Why am I using 48GB of RAM running at 2933 MHz (2x8GB 3200 MHz and 2x16GB 3600 MHz kits) with my Ryzen 1600? Modded Cities Skylines uses over 30GB of memory. I would have considered an Optane drive for page file usage if they were more affordable.
While I do have a RX 570 4GB and a 1080p 60Hz monitor, I intend on upgrading the GPU when prices are more reasonable and getting a second monitor with a new GPU.
The problem is that Cities Skylines remains an absolute CPU hog, and I recently installed mods that extended the game engine's limits for an even larger city, so I expect memory and CPU usage will go up with that.
The other games that I play are Civilization 6 (also CPU bound, especially on huge maps full of AI civs and city states, and I think the mods are likely making it even more of a CPU hog with all of the extra features they added) and occasionally Total War Shogun 2 (single core and FPS falls below 30 in large battles on the Ryzen 1600).
The options I am looking at are:
i5 12400, preferably with a DDR4 motherboard bundle as I would need a new mobo.
i5 12600K with a DDR4 Z690 motherboard bundle deal that I've been looking at for $397: https://www.reddit.com/r/buildapcsales/comments/u9971w/cpuboard_i5_12600k_z690_gaming_x_ddr4_397_makes/
Ryzen 5600
Ryzen 5700 (only if I know for certain if I'll be playing a game that scales to 6-7 cores so Windows 10's background services and other background applications don't impact the gaming)
Ryzen 5800X3D (if I get this, I would be riding the system until DDR6 had already launched...)
I don't plan on any major overclocking. If it's much cheaper for me to go with a non-K edition i5 and non-OCing motherboard, I'll strongly consider that as long as I can still do some RAM OCing as otherwise the motherboard might default to something like 2133/2400 MHz with the 48GB kit (which my B450 board will do on its own if I don't manually OC the RAM).
EDIT: it seems that non-K edition CPUs might have issues with RAM OCing, which makes me concerned, as I absolutely do not want to be running my RAM at sub-2933 MHz:
https://linustechtips.com/topic/1406237-intel-allows-memory-overclocking-on-b660-and-h670-but-only-sort-of/
https://www.reddit.com/r/intel/comments/s3pv0h/alderlake_12xxx_non_k_ram_overclocking/
Hello all,
I'm just curious because my brother is getting a 1080p monitor to pair with his rtx 3060 ti and R5 5600x.
From what I see on this sub, gaming at 1080p leaves you CPU-bound. I was wondering: surely upgrading the GPU would still allow you to push out more FPS, since it's supposed to be a better GPU, or would you only notice a significant difference by getting a better CPU, say an R7 5800X?
I'm getting the 8 bit cpu on the third (payday lol) and I'm really excited to build it! I'm going into Electrical Engineering + Computer Science courses this fall and I can't wait to get some hands on experience with the kit. So my question is stated in the title. I would also like some additional resources so I can go ham crazy with this!
Obviously, the game becomes harder to run with mods. But, I've got some cash kicking around and with these price drops, I've been considering swapping out my RTX 2060 for either a 3060 or 3060 Ti. My CPU is a Ryzen 3700X, so I don't feel like the game is TOO much for that, even with mods. (The game runs fine without mods, consistently 90 FPS. With mods, there is a noticeable lag; I feel like the FPS range is somewhere between 50-60.) My question is mainly: would upgrading my GPU actually improve my performance as much as I think it would?