Machine Learning general. So anybody here actually do any ML programming? I just installed PyTorch yesterday and actually started training some shit. It was pretty easy to get working. But… I am a dumb dumb, so I am going to go back and learn all the basics I think, because I have delusions that maybe I'll be able to do something interesting with it, but I know unless I'm really knowledgeable about it, the probability is less than zero.
I know that thread about bitching about ML is the most popular thread on /tech/ but I thought we should have a separate thread for people actually programming it.
Don't do the Andrew Ng deep learning Coursera crap, it's a scam. That's basically all I've attempted to do.
I'd recommend you write a basic Multi Layer Perceptron from scratch before even looking at things like PyTorch
>>31071
Thanks for the book. Looks interesting.
>>31072
Because it's not so easy to get working
>>31074
Why would it be easier to get something working from scratch? I got it working and training in a few hours.
>>31075
>I'd recommend you write a basic Multi Layer Perceptron from scratch before even looking at things like PyTorch
<Why
>Because it's not so easy to get working
<Why would it be easier to get something working from scratch? I got it working and training in a few hours.
Ok you got it working, do you understand how it all works any better than you understood this exchange?
>>31076
>Ok you got it working, do you understand how it all works any better than you understood this exchange?
Snarky. Yeah you quoted the whole exchange and now you are changing what your reason is, yet still not even directly stating it.
>I think it is better to build your understanding by making a "Multi Layer Perceptron from scratch" before using PyTorch
Ok. You could just say that. Also maybe you should give your reasoning.
What's with tech people and having serious personality issues? I think that's what's driven me off: I picture turning into one of you.
>>31077
Stop wasting my time
>>31078
Bitch your time isn't worth shit. No one asked you to post in this thread.
>>31079
Yes you did, stop wasting my time and write a simple one from scratch
>>31080
Have you written one?
>>31081
Yes, and I'm happy to walk you through it if you make the attempt
What I'm not going to bother with is childish insults from you
I'm not suggesting you do anything heavy duty
Try making a network with 2 inputs, 2 outputs and one hidden layer of 3 "neurons" and train it on an XOR function, then do the same with 2 hidden layers
So OP I've noticed that you suddenly stopped replying here and now there's someone posting cope all over various threads
That you?
>>31084
Cope? About what? I already explained the situation.
>>31082
>What I'm not going to bother with is childish insults from you
You started with it. I asked you a simple question and you got pissy right away for no reason.
>>31080
>Yes you did, stop wasting my time and write a simple one from scratch
No I did not ask you specifically to post in this thread. I just made it as a general thread to collect information or discuss anything regarding studying or developing with this kind of technology. If you want to post about that subject, feel free to. I didn't ask for you to personally tutor me. I tried to fucking move the conversation on and ignore your attitude so this thread isn't derailed with this stupid shit.
>>31082
>Yes, and I'm happy to walk you through it if you make the attempt
But thank you for the offer. If you want to write a guide on how to do that, feel free to. It's not for me, but for anyone who comes to this thread and is interested in learning about the subject.
So let's just drop whatever beef or something you think is going on here.
>>31096
I'm not suggesting you do anything heavy duty
Try making a network with 2 inputs, 2 outputs and one hidden layer of 3 "neurons" and train it on an XOR function, then do the same with 2 hidden layers
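And since you said to feel free to post a guide: here's a rough NumPy sketch of exactly that setup (2-3-2, sigmoid everywhere, plain full-batch backprop on squared error). Not claiming it's the cleanest way to do it, and depending on the random init you might have to nudge the learning rate or epoch count, but every line is something PyTorch would otherwise be doing for you.

```python
# Minimal MLP from scratch: 2 inputs -> 3 hidden -> 2 outputs, trained on XOR.
# The two outputs are a one-hot encoding of the XOR result (class 0 vs class 1).
import numpy as np

rng = np.random.default_rng(0)

# XOR truth table; targets are one-hot so both outputs actually get used.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[1, 0], [0, 1], [0, 1], [1, 0]], dtype=float)

# Weights and biases, small random init.
W1 = rng.normal(0, 1, (2, 3))
b1 = np.zeros(3)
W2 = rng.normal(0, 1, (3, 2))
b2 = np.zeros(2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 2.0
for epoch in range(10000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)          # hidden activations, shape (4, 3)
    y = sigmoid(h @ W2 + b2)          # outputs, shape (4, 2)

    # Squared-error loss and its gradient w.r.t. the output pre-activation.
    loss = np.mean((y - Y) ** 2)
    d_out = (y - Y) * y * (1 - y)     # dL/dz2, up to a constant folded into lr

    # Backprop through the hidden layer.
    d_hid = (d_out @ W2.T) * h * (1 - h)

    # Full-batch gradient descent update.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_hid
    b1 -= lr * d_hid.sum(axis=0)

    if epoch % 2000 == 0:
        print(epoch, loss)

# Should converge to [0 1 1 0]; if it plateaus, try another seed or a higher lr.
print("predictions:", np.argmax(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), axis=1))
```

For the two-hidden-layer version, add another weight/bias pair and repeat the same backprop step one more time going backwards.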
>>31072
I vouch for that idea tbh
>>31120
Wasn't trying to have a debate. I just asked for his reasoning, but alright.
>>31121
Well, if you want mine: I think it's faster than learning the prerequisite theoretical stuff first, it's pretty propaedeutic I guess
Bros, I'm taking a grad course and the whole thing is about SAT solvers. Does anyone even use that shit anymore after the LLM hype?
>>31171
You're in a good place
https://news.ycombinator.com/item?id=45114579
Loot all the papers and ideas from this link
Me, I'm looking at the Huawei 96GB compute
>>31171
Of course. SAT is actually useful.
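If the whole course is SAT, the core problem is just: given a formula in CNF, find a true/false assignment that satisfies every clause. Real work uses CDCL solvers (MiniSat, CaDiCaL, the python-sat bindings, whatever your course hands you); this is only a toy brute-force checker to make the encoding concrete, with clauses written DIMACS-style as lists of signed variable indices.

```python
# Toy SAT check by brute force, just to show the clause encoding.
# A clause is a list of signed ints: 1 means x1, -3 means NOT x3 (DIMACS style).
# Real solvers do this with CDCL and clause learning, not enumeration.
from itertools import product

def solve(num_vars, clauses):
    for bits in product([False, True], repeat=num_vars):
        # bits[i] is the value assigned to variable i+1
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return bits
    return None

# (x1 OR x2) AND (NOT x1 OR x3) AND (NOT x2 OR NOT x3)
clauses = [[1, 2], [-1, 3], [-2, -3]]
print(solve(3, clauses))  # (False, True, False): x1=0, x2=1, x3=0 satisfies everything
```

Package dependency resolution, hardware verification, scheduling: all of it gets squeezed into that clause format, which is why the solvers still matter regardless of the LLM hype.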
>>31226
Just to make it clear, the cool stuff about brainwaves is separate, and I can dig up the papers if anybody is interested
Also important: that video is 10 months old, and that's a lifetime in AI. Even better self-driving cars are coming out in China now, based on the… I shouldn't even have to explain this, it's literally common sense. If I find the recent Chinese self-driving car footage I'll explain, and may even be bothered rooting through my notes for papers.
So it looks like the machine learning / AI stack is Python and C++
https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2025/p2900r14.pdf
Makes sense, Python was originally for Amoeba clusters
https://lavamoat.github.io/guides/webpack/
May be good for touching disgusting JavaScript
Dunno, type enforcement with macros
Whatever is necessary, stick an entire Lisp in there, why not
Or Rust or C, whatever works for WebAssembly
https://www.shloked.com/writing/claude-memory
People on news.ycombinator.com seemed surprised to learn this, so I figure some of you might find it handy
https://anishathalye.com/semlib/
For anything practical, unless you want to learn C++ or something, the answer to "how do I do this with AI" is always Python, or a chatbot interface
Death to MCP unless I find a use for it
>>31292
Proletariat replaced by AI… access to AI uses high-level languages that hide features… total deskilling of the proletariat imminent
Seems like context density plays a huge role, i.e. how much relevant data is in your context.
Things like reducing a feature implementation prompt context to a minimal mockup.
Or even something as simple as opening a new chat window when there's a wrong turn taken.
There's some research on this [^1].
[^1]: https://research.trychroma.com/context-rot
>>31353
Am interested in whether "Let's Verify Step by Step" can be applied to prompting. So you construct a prompt telling it to explain step by step what it's doing. You then give incremental feedback as it's constructing the chain of thought. The response steps are kept small so there's not much time lost to generation. Bonus: it makes the models look real stupid, and you have to think a little.
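A rough sketch of that loop, in case anyone wants to try it. call_model() here is a stand-in for whatever chat API or interface you actually use; the only point is the structure: ask for one small step, read it, inject your feedback, repeat.

```python
# Sketch of "one small step at a time, with human feedback in between".
# call_model() is a placeholder: it just has to take a message list and
# return the assistant's next reply as a string, via whatever API you use.
def call_model(messages):
    raise NotImplementedError("plug in your API / chat interface here")

SYSTEM = (
    "Work on the task one short step at a time. "
    "Output exactly one step (a couple of sentences), then stop and wait. "
    "When the task is finished, output only DONE."
)

def guided_run(task, max_steps=20):
    messages = [{"role": "system", "content": SYSTEM},
                {"role": "user", "content": task}]
    for _ in range(max_steps):
        step = call_model(messages)
        print("\nmodel:", step)
        if step.strip() == "DONE":
            break
        messages.append({"role": "assistant", "content": step})
        # Incremental feedback on the step it just produced; empty = keep going.
        feedback = input("your feedback (enter to continue): ").strip()
        messages.append({"role": "user",
                         "content": feedback or "Looks fine, continue with the next step."})
    return messages
```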