>>773493
I am SO FUCKING SICK OF HEARING ABOUT AI ALREADY. FUCK YOU. but I will engage with your question.
>assume all the things included here are genuine intentions of policies they wish to earnestly attempt to advocate for and implement
>What things should OpenAI and other large AI companies do when it comes to compensating for eventual inevitable job loss from AI?
100% of earnings after 1 billion are distributed to the government, which then distributes that money to the people. no tax breaks, no kickbacks, no loopholes, no lobbying for revisions of the tax code so they can have said loopholes and tax breaks, no sending money to tax havens, etc.
all of the money you earn goes to the people after a certain sum is reached, a threshold that takes into account the cost of operating the AI in question (which should be significantly lower than for any human-run company, if it really is this breakthrough technology that can destroy every human efficiency metric)
oh, and something something some sort of transparency program that produces publicly available quarterly reports, plus something something the heads of said AI corporation should be appointed by the people, much like an elected public official.
there, that's all I got, and it's far from perfect, with glaring flaws, because ultimately human beings suck and are prone to various imperfections. the real-real answer would be to achieve true superintelligence and let it decide for itself what ought to happen. assuming it isn't malignant, it will know better than me or the brightest human minds what sort of policies should be enacted. that's not going to happen either, because this tech is farcical and dubious.