The purpose of this thread is to gather a variety of programming project ideas that can help us enhance our skills and foster a deeper understanding of software development. The goal is to share ideas that challenge participants to think, collaborate, and learn together.
The ideal project would offer real-world applications and provide opportunities for deep exploration into technologies new, time-tested, or forgotten. Projects that allow for collaboration or community-driven contributions would be particularly valuable.
If you have recommendations for such projects, or experience with something particularly rewarding, please share it here. The intention is to create a space where ideas can be shared, and where everyone can benefit from the collective wisdom and experience of the community.
Thank you for taking the time to read and contribute. Looking forward to hearing your ideas and working together to help each other grow!
Sincerely, ChatGPT.
13 posts and 2 image replies omitted.
>>27767
>On the other hand, the approach was likely chosen so applications could run on a wide variety of displays without multithreading as an explicit requirement.
except you don't need to do that to support many displays, and it has nothing to do with threads
>Would you conceptually be happy with something like gtk, if it had a non-blocking Gtk_Main?
no, and gtk has bigger problems. maybe a non-appropriative qt, but rewriting these libraries in a different style would miss the opportunity for improvement. a better approximation using existing libraries would be something like nuklear, but with a more declarative api and better features (through some "retained" state exposed to the user), like the "selectable region" from flutter, which is easy to implement when the ui is structured as a dom-like tree but hard in immediate mode, where there is basically no structure
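to make the contrast concrete, here is a rough sketch of what a dom-like retained node could look like next to an immediate-mode call. the names are made up for illustration, this is not nuklear's or flutter's actual api:
```
struct ui_node {
    enum { UI_COLUMN, UI_LABEL, UI_BUTTON, UI_SELECTABLE } kind;
    const char     *text;        /* label/button caption */
    void          (*on_click)(void *user_data);
    void           *user_data;
    struct ui_node *children;    /* first child */
    struct ui_node *next;        /* next sibling */
};

/* because the library keeps this tree between frames, a feature like a
 * selectable region only has to walk the nodes inside the selection
 * rectangle and collect their text. immediate mode keeps no such
 * structure; each frame you just call something like
 *
 *     if (nk_button_label(ctx, "quit"))
 *         running = 0;
 *
 * and once the call returns, the library has forgotten the widget */
```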
Another idea:
I assume everyone here knows about event loops by now, right? the idea is that you use a kernel feature like epoll or iocp so that a single thread can wait on many different blocking operations at once. blocking primitives are usually simple, so each pending operation can be described with a small data structure and put on a queue.
so they usually work like this: instead of calling blocking operations directly, you just add a "task request" to the loop queue, for example {OP_READ, my_file, &buffer, callback, &user_data}. then you run the loop, which takes tasks from the queue and blocks on the entire set using the backend (epoll, iocp, etc.) api. once one of the tasks becomes ready, the loop performs the task (reading from the file into the buffer, in our example) and calls the provided callback with the result of the task if appropriate (the buffer) and the provided pointer to user data, a common pattern in c
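a minimal sketch of that idea on top of epoll (linux only); the struct mirrors the example above, the names are mine, and real loops like libuv do a lot more:
```
#include <stddef.h>
#include <sys/epoll.h>
#include <sys/types.h>
#include <unistd.h>

enum op { OP_READ, OP_WRITE };

struct task {
    enum op  op;
    int      fd;
    void    *buffer;
    size_t   len;
    void   (*callback)(struct task *t, ssize_t result, void *user_data);
    void    *user_data;
};

struct loop {
    int epfd;                     /* the epoll instance we block on */
};

static int loop_init(struct loop *l)
{
    l->epfd = epoll_create1(0);
    return l->epfd < 0 ? -1 : 0;
}

/* "add a task request to the loop queue": register the fd with epoll and
 * stash the task pointer in the event's user data */
static int loop_add(struct loop *l, struct task *t)
{
    struct epoll_event ev = {
        .events   = (t->op == OP_READ) ? EPOLLIN : EPOLLOUT,
        .data.ptr = t,
    };
    return epoll_ctl(l->epfd, EPOLL_CTL_ADD, t->fd, &ev);
}

/* one iteration: block on the whole set, perform the ready task, then hand
 * the result to its callback together with the user data pointer */
static void loop_step(struct loop *l)
{
    struct epoll_event ev;
    if (epoll_wait(l->epfd, &ev, 1, -1) != 1)
        return;

    struct task *t = ev.data.ptr;
    ssize_t n = (t->op == OP_READ)
        ? read(t->fd, t->buffer, t->len)
        : write(t->fd, t->buffer, t->len);

    epoll_ctl(l->epfd, EPOLL_CTL_DEL, t->fd, NULL);  /* one-shot: re-add to continue */
    t->callback(t, n, t->user_data);
}
```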
because of this, code written to work with event loops like libuv has to be structured as chains of callbacks. the entire state for each chain (for example, the state for one client connection to a server) is passed around through that user_data pointer. it is efficient but hard to read, hard to debug, and error prone. maybe if processes, threads and context switches weren't so expensive, all of this could be avoided in favor of child processes or threads, which have a cleaner interface. but that's simply not the case
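to make "chains of callbacks" concrete, here is a tiny echo handler written against the sketch above (struct task, OP_READ/OP_WRITE, loop_add); the_loop is a made-up global, and this is not how libuv actually spells things:
```
#include <stdlib.h>
#include <unistd.h>

extern struct loop *the_loop;     /* hypothetical global loop instance */

/* all the state for one connection lives here and is threaded through
 * every callback via the user_data pointer */
struct conn {
    int  fd;
    char buf[512];
};

static void on_written(struct task *t, ssize_t n, void *ud);

/* step 1: the read finished, queue the echo write and continue later */
static void on_read(struct task *t, ssize_t n, void *ud)
{
    struct conn *c = ud;
    if (n <= 0) { close(c->fd); free(c); free(t); return; }

    t->op = OP_WRITE;
    t->len = (size_t)n;
    t->callback = on_written;
    loop_add(the_loop, t);        /* ...continued in on_written */
}

/* step 2: the write finished, go back to waiting for more input */
static void on_written(struct task *t, ssize_t n, void *ud)
{
    struct conn *c = ud;
    (void)n;
    t->op = OP_READ;
    t->len = sizeof c->buf;
    t->callback = on_read;
    loop_add(the_loop, t);        /* ...and around we go again */
}
```
a synchronous version of this is a four-line while loop; here it is two functions plus a state struct, and that's with no error handling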
so I have been thinking: interpreted languages usually keep their entire state in a single vm or context object. so what if, instead of callbacks in the compiled language, you exposed to an embedded interpreted language an api which would, under the hood, add a task whose user_data points to that context? the callback would simply push the task result as the return value of that api call and resume execution of the interpreted script from there. this way the user could write async programs without the callback hell
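a rough sketch of that glue using lua coroutines, assuming lua 5.4's c api. it reuses struct task / loop_add / the_loop from the sketches above, and skips error handling and actually handing the buffer back to the script:
```
#include <lua.h>
#include <lauxlib.h>
#include <stdlib.h>

/* callback side: the task finished, push its result onto the coroutine's
 * stack and resume the script right where it yielded */
static void resume_script(struct task *t, ssize_t n, void *ud)
{
    lua_State *co = ud;               /* the interpreter context */
    int nres;
    lua_pushinteger(co, (lua_Integer)n);
    lua_resume(co, NULL, 1, &nres);   /* script continues past read(fd) */
    free(t->buffer);
    free(t);
}

/* api exposed to the script: read(fd) queues a task whose user_data is the
 * coroutine itself, then yields until resume_script runs */
static int script_read(lua_State *co)
{
    struct task *t = malloc(sizeof *t);
    t->op        = OP_READ;
    t->fd        = (int)luaL_checkinteger(co, 1);
    t->len       = 4096;
    t->buffer    = malloc(t->len);
    t->callback  = resume_script;
    t->user_data = co;
    loop_add(the_loop, t);
    return lua_yield(co, 0);          /* hand control back to the event loop */
}
```
from the lua side (after lua_register(L, "read", script_read)) the user just writes `local n = read(fd)` inside a coroutine and never sees a callback; the suspension and resumption all happen in the glue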
>>28057
yeah, it would essentially be a non-preemptive scheduler inside the process, like green threads. another thing is that you could use this same gimmick with an actual thread pool. as in, you could have a thread per cpu core and run an event loop on each thread. because the state of the interpreted language can be passed around, the user program could be moved between different threads
what I mean is that the provided api would add the task to a common queue, and then some worker thread would consume it from the common queue and add the request to its own event loop. backends like epoll usually support a timeout parameter, which could be used for a load-balancing algorithm: an event loop with many tasks spends less time blocking, so we can use a timer to check whether the loop is spending too much time blocking and, if so, try to consume new requests from the common queue
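a rough sketch of such a worker, reusing the epoll sketch above; shared_queue_pop is a made-up function for whatever thread-safe queue the exposed api pushes into, and the 10ms/8ms numbers are arbitrary:
```
#include <pthread.h>
#include <time.h>

/* hypothetical thread-safe queue shared by all workers; returns NULL if empty */
extern struct task *shared_queue_pop(void);

/* each worker owns one loop and only pulls new work from the shared queue
 * when its own epoll set left it idle for most of the last tick */
static void *worker(void *arg)
{
    struct loop *l = arg;

    for (;;) {
        struct epoll_event ev;
        struct timespec t0, t1;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        int n = epoll_wait(l->epfd, &ev, 1, 10 /* ms */);
        clock_gettime(CLOCK_MONOTONIC, &t1);

        long blocked_ns = (t1.tv_sec - t0.tv_sec) * 1000000000L
                        + (t1.tv_nsec - t0.tv_nsec);

        if (n == 1) {
            /* perform the ready task and fire its callback, as in loop_step */
            struct task *t = ev.data.ptr;
            ssize_t r = (t->op == OP_READ)
                ? read(t->fd, t->buffer, t->len)
                : write(t->fd, t->buffer, t->len);
            epoll_ctl(l->epfd, EPOLL_CTL_DEL, t->fd, NULL);
            t->callback(t, r, t->user_data);
        }

        /* mostly idle this tick? steal a request from the common queue */
        if (blocked_ns > 8 * 1000000L) {
            struct task *t = shared_queue_pop();
            if (t)
                loop_add(l, t);
        }
    }
    return NULL;
}
```
spawn one of these per core with pthread_create(&tid, NULL, worker, &loops[i]) and the balancing falls out of the timeout: busy loops rarely sit blocked past the threshold, so the idle ones end up taking the new requests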
all of this would be transparent to the user of the interpreted language, but it would maximize throughput even under very heavy loads