A CHARMING HISTORICAL COZY captures the feel of a small town in California's Sierra Nevada lurching reluctantly into modernity in 1921. Characterization shines, and the plot piles intrigue upon intrigue. B PLUS
barnesandnoble.com/w/the-case-β¦
#book #Books #bookreview #bookreviews #fiction #novel #novels #mystery #mysteries #historicalfiction #historicalmysteries #CozyMysteries #SelfPublishedSunday
Population Change in % of Every Belarusian District from 2024 to 2025
Here is the source I used to obtain the data: https://www.belstat.gov.by/upload/iblock/e1d/fw6q0hxkv3wmuqsqthztn49vpszatpj5.pdf (BYTESEU / Bytes Europe)
To solve the mystery of this shower cabin
In a sense, this can do this. When you enter, you put your foot in front of you and then you pull the other behind. And since it goes wider and wider with a… (BYTESEU / Bytes Europe)
Rodney Taylor is a disabled double amputee suffering extreme medical neglect at an ICE detention facility in Georgia.
The reason he's detained? "A burglary conviction he received as a teen & which Georgia pardoned him for in 2010"
When ICE took him, he was 2 days away from receiving new prosthetics.
Now he's forced to wear shoes which are too tight, causing extreme pain, and his prosthetic legs require eight hours of charging which the facility almost never provides.
They offered him a wheelchair, without assessing whether he could actually push it (he can't).
He's gone without food, he's got increasingly severe hip issues and his request for a medical leave to be assessed for new prosthetics was denied.
This is cruel and inhumane, and is being made easier because the Regime closed the Office for Civil Rights and Civil Liberties (CRCL) and the Office of the Immigration Detention Ombudsman (OIDO). They were critical agencies meant to protect people in custody from exactly this type of neglect.
We already know disabled people have died at the hands of ICE. There are no doubt countless more who will never have their stories told.
These practices must be stopped. No one should be treated this way, regardless of citizenship, race, health status, or any other metric. No one deserves this.
Rodney has been speaking out in an attempt to raise awareness of his own plight, as well as the suffering of others being held in the same facility.
Alma Bowman, another disabled detainee, spoke out during the last Trump admin and has found herself in custody again.
These cruel practices must end.
We must protect those who speak out from retaliation.
We must stop debating the worth of individuals.
We must establish strong networks of community care and neighbourhood ICE watches to protect our friends and neighbours from a similar fate.
#uspol #ice #immigration #fascism #disability #ableism #eugenics
Colorado measles outbreak: 3 more cases tied to Turkish Airlines flight that landed at DIA
coloradosun.com/2025/06/01/colβ¦
Colorado measles outbreak: 3 more cases tied to Turkish Airlines flight that landed at DIA
The new cases bring the total tied to Turkish Airlines flight 201 to six, meaning it meets the official definition of an outbreak. (John Ingold, The Colorado Sun)
In the #Fediverse there are now
564 media accounts federated by @Flipboard.
362 were active today.
Some accounts that were active today are:
@instyle
@MensHealth
@AssociatedPress
@RealSimple
@PopularScience
Find the whole list on:
➡️ fingolas.eu/fediverse/overview…
Built by @mho
#MastodonMigration #SocialMedia #Mastodon #Media #Press #Newspaper #TwitterMigration #Newstodon
The lost historical opportunity of Albanians
I believe that it is precisely through the presence of opposites and the feelings that they occasion that the great man, the bow with the great tension,… (BYTESEU / Bytes Europe)
Gyudon #Dinner tonight
Just a quick sear, slight char on onions before dashi and meat went in. Skim, then mirin, sake, pinch of sugar, then soy. "Shabu shabu" cut beef wasn't as thin or marbled as I would have liked so had to simmer much longer, good soy to finish.
US eggs, so no raw egg topping, but still terrific with home-pickled onions and store-bought pickled ginger (didn't have beni shoga) over rice.
Simple, satisfying
Students have been generative AI's most enthusiastic early adopters, says Nicholas Carr, with nearly 90% of college students and more than 50% of high school students regularly using chatbots for schoolwork.
"Because text-generating bots like ChatGPT offer an easy way to cheat on papers and other assignments, students' embrace of the technology has stirred uneasiness, and sometimes despair, among educators." "But cheating is a symptom of a deeper, more insidious problem. The real threat AI poses to education isn't that it encourages cheating. It's that it discourages learning."
Before we continue, let me say that yes, I am aware Nicholas Carr is considered a "Luddite". However, let's continue anyway.
He says that when a task is automated, human skill either increases, atrophies, or never develops. At first, this sounds like saying the stock market will either go up or down, in which case it's impossible to be wrong because you've covered all the possibilities. (The stock market is pretty unlikely to stay the same down to the penny.)
But on closer inspection, he's making a more nuanced argument. When someone is already an expert, task automation frees them up to learn more challenging concepts. When someone has learned a skill but is not an expert, automation leads that skill to atrophy. And when someone has never learned the task at all, automation prevents them from ever learning it.
A simple example might be calculators. Throughout most of human history, mathematicians did not have calculators. Having a calculator bestowed on one from a time traveler from the future would free an expert mathematician to develop more advanced mathematical concepts. But there are probably many people alive today who learned how to calculate by hand but let the skill atrophy because they always use calculators, and there are probably many people alive today who never learned how to do arithmetic by hand at all. That missing foundation may impair their ability to learn more advanced mathematics.
Ok, so far so good -- I'm following his line of reasoning. He quotes Clay Shirky, who apparently in the years since I read his book (Here Comes Everybody, published in 2008) has managed to get himself promoted to the prestigious title of "Vice Provost for AI and Technology in Education at New York University", as saying that while the "output" of school is papers, exams, research projects, and so on, the "product" is the student's "experience" of learning.
"The utility of written assignments relies on two assumptions: The first is that to write about something, the student has to understand the subject and organize their thoughts. The second is that grading student writing amounts to assessing the effort and thought that went into it. At the end of 2022, the logic of this proposition -- never ironclad -- began to fall apart completely. The writing a student produces and the experience they have can now be decoupled as easily as typing a prompt, which means that grading student writing might now be unrelated to assessing what the student has learned to comprehend or express."
That's Clay Shirky. Getting back to Nicholas Carr, he talks about how AI produces "the illusion of learning":
"An extensive 2024 University of Pennsylvania study of the effects of AI on high-school math students found, as its authors write in a forthcoming PNAS article, that 'access to GPT-4 significantly improves performance [as measured by grades],' but when access to the technology is taken away, 'students actually perform worse than those who never had access.'"
"An ironic consequence of the loss of learning is that it prevents students from using AI adeptly. Writing a good prompt requires an understanding of the subject being explored."
I'll come back to that last bit.
Ok, I think that conveys the gist of the piece. Now for my commentary. Which I'm guessing a lot of you won't like, but here goes.
One of the super hard, super painful lessons of my life was that the purpose of school isn't learning. The purpose of school is grades. If I'm in a class where I'm not learning anything, and nobody is learning anything (easy to ascertain just by asking fellow students), the correct course of action is not to complain. Shut up, do what you're told, get your "A", and go on with your life. If you complain, if you treat the lack of learning like a "problem" that needs to be solved, a lot of bad things happen, and zero good things happen. I know because when I was in school, I actually ran this experiment. You get labeled "disobedient" and a "troublemaker". The teacher will tell other teachers what a "disobedient" student you are and what a "troublemaker" you are, so before you even walk into any other class at the university, they will already have heard about you -- in a negative way. The administration will hear about how "disobedient" you are and what a "troublemaker" you are. Your fellow students, who understand full well that they are there to get "A"s and if they learn a lot, that's great, but if not, it's ok to sacrifice learning to get "A"s rather than the other way around -- the "A"s are what ultimately matters -- will ostracize you, because they don't want to get on the teacher's "enemies" list. The teacher will take you telling them that they are incompetent at their job personally, and will make it their mission in life to destroy you. Questioning their authority is not allowed.
It's taken me years to make sense of this, but the way I've made sense of it is in terms of intrinsic vs extrinsic motivation. "Learning" is an intrinsic motivation. "Grades" is an extrinsic motivation. The entire educational system in this country, and just about every other, everywhere in the world, is built on the assumption that students are motivated by grades. Therefore the system requires extrinsic motivation. I've come to think of intrinsic motivation as a separate dimension of personality, so you can plot "intrinsic" and "extrinsic" on 2 axes. At the Boulder Hackerspace, everyone who I met there was there because of intrinsic motivation -- people go to hackerspaces to build their own projects, whatever they're curious about doing. But some people had advanced degrees, which requires a lot of extrinsic motivation. So I think it's possible for a person to be high on both intrinsic and extrinsic motivation. The key thing to understand is that the educational system is indifferent to intrinsic motivation -- all it cares about is extrinsic motivation, that students are motivated by grades. The best students are students who treat school like a "game" -- like a giant, real-life video game where the goal is to get the high score. How do you get the high score? In the context of education, a high score is a high GPA, in a prestigious major, from a prestigious school.
If you were to ask me now, I would say the purpose of school is sales. You get a degree so that when you send in your résumé for a job, people go "Oh my god, you have XYZ degree from XYZ school! We have to hire you right now!" Normally when you send in your résumé, people are like, "Bartholomew Anoplipsqueroidi? Who is Bartholomew Anoplipsqueroidi?" or whatever your name is. But if you have "University of Colorado at Boulder" attached to your name -- or better yet, MIT, Stanford, Harvard, etc -- now people will be like, "I've heard of CU Boulder (or MIT, Stanford, Harvard, etc)!" That's why you get a degree -- it's to attach a famous brand name to yourself. And then use that to "sell yourself" on the job market. (Maybe people who need this explained explicitly are autistic or something? -- normal people seem to understand automatically that the purpose of school is not learning, but maybe it's helpful to pretend the purpose of school is learning for sales purposes, and that if you follow the extrinsic rewards backward from money to job offers to degrees to GPAs to "A"s in specific classes, it all makes perfect sense.)
You might think employers would care about the actual learning, but I realized afterward, no employer is going to go through your transcript and inquire as to whether you learned all the concepts on the syllabus for each course. All they care about is: degree or no degree? And maybe they care about your GPA for the first few jobs. For them, it's a fast way to sift through a pile of résumés. I don't think employers care about intrinsic vs extrinsic motivation -- for them, it's probably fine for people to be money-motivated (extrinsic motivation) because that gives the employer a lever of control. I once saw an interview with an economist on YouTube. I wish I had the link handy, but I seem to have lost track of it. Anyway, he said, a college degree signals three things to employers: 1. That the person is smart, 2. That the person is hard-working, and 3. That the person is "conformist". I would probably have been less charitable and used the word "obedient" instead of "conformist" because I got hammered with the "disobedient" label so much. But the fact that I was "disobedient" and denied a college degree on that basis is, perhaps, a correct assessment: if a person is "disobedient" that person is genuinely not wanted by employers, who want "obedient" employees, and so "disobedient" people should be filtered out. So ultimately the university did the correct thing, though I didn't understand it at the time. It would have been in error for the educational system to certify me as sufficiently obedient for employers, when in reality I wasn't.
Ok, so, two things. First, the purpose of school is not learning. That's the first mistake Nicholas Carr makes throughout his piece. The second is: Shouldn't students be learning to use AI? I'm currently employed and in the workforce (I've sufficiently gotten it through my thick skull that I must be obedient -- I'm obedient enough to get by), and I'm reminded on a fairly regular basis these days that I'm not doing a good job of 5x-10xing my productivity using AI. I'm supposed to fix software bugs 5x-10x faster. I'm supposed to implement new software features 5x-10x faster. Anthropic just came out with Claude 4, and it's supposed to be able to handle enormous context windows without "forgetting" content in them the way large-context-window models typically do, and this is supposed to help tremendously with getting AI agents to make changes in a large, existing codebase. So somehow I've got to set aside time for learning Claude Code with this new model and how to get AI to do the work of multiple software engineers. If this is what a typical workplace is like now, shouldn't young people be learning how to do exactly this?
It makes me think we should abandon the idea that using AI is "cheating", and make the assignments so hard the only way they can be done is with AI assistance, to make school assignments more like the workplace we are preparing students to enter (supposedly). One simple way to do this could simply be time. Instead of having a writing assignment issued on Monday and due, say, by midnight on Sunday (or whatever deadline is typical of work submitted online these days), make it so the assignment is issued on Monday at 10:00am and the first 50% will be graded and the second 50% will all be automatic "F"s. If the first 50% are all submitted by noon on Monday -- assisted by AI, of course -- then all the students who even attempt the assignment without AI will automatically fail. This will motivate students to invest heavily in learning how to prompt AI systems -- "prompt engineering" (lol, that term still seems ridiculous) as it's now called.
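A minimal sketch of that speed-based grading rule, just to make it concrete (the function name, student names, and timestamps are all made up for illustration): submissions are ordered by arrival time, the earliest half go on to be graded, and the later half fail automatically.

```python
from datetime import datetime, timedelta

def triage_submissions(submissions):
    """Order submissions by timestamp; the earliest half get graded,
    the later half receive an automatic F (the speed-based rule above)."""
    ordered = sorted(submissions, key=lambda s: s[1])
    cutoff = len(ordered) // 2  # only the first 50% survive
    graded = [name for name, _ in ordered[:cutoff]]
    failed = [name for name, _ in ordered[cutoff:]]
    return graded, failed

issued = datetime(2025, 6, 2, 10, 0)  # assignment goes out Monday 10:00am
subs = [
    ("alice", issued + timedelta(hours=1)),  # AI-assisted, done by 11am
    ("bob",   issued + timedelta(hours=2)),
    ("carol", issued + timedelta(days=4)),   # worked unassisted all week
    ("dave",  issued + timedelta(days=6)),
]
graded, failed = triage_submissions(subs)
print(graded)  # ['alice', 'bob']
print(failed)  # ['carol', 'dave']
```

Note the rule grades speed relative to the cohort rather than against a fixed deadline, so with an odd-sized class the median submission lands in the failing half.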
(In reality, no school today would ever give 50% of students failing grades in any class -- in fact the opposite phenomenon, grade inflation, is happening. Grade inflation is when average grades throughout the country go up, but average scores on standardized tests don't budge at all. Since we're looking at averages, we can't pin blame on any particular school, teacher, or student, but we can see that incentives align throughout society for higher grades to be given for less learning. But grade inflation is its whole own topic for some other time.)
Never mind the question of how all these AI-generated assignment submissions would be graded (maybe AI-graded, too? lol).
That leads me full circle back to the bit I said I'd come back to.
"An ironic consequence of the loss of learning is that it prevents students from using AI adeptly. Writing a good prompt requires an understanding of the subject being explored."
Hmm. Assuming this is true, and it does seem reasonable that it would be true, this seems like quite a dilemma. Does motivating students to become expert AI prompters give them sufficient reason to learn the underlying fundamentals? Or does the process simply fail here, and AI-less learning of the fundamentals remains necessary? Or should the school system simply take a sink-or-swim approach: give good grades to the best AI-generated work, irrespective of how it was accomplished? Let students fend for themselves to figure out how to properly prompt AI? That's how the world of work is today, so maybe it makes sense for school to work the same way? What do you all think?
I think we all know, it's just a matter of time before AI automates all jobs. I don't know how long it will take. But there's only X years of jobs left, for some X, and young people need to learn how to maximize their income in the labor market while the labor market exists. Ultimately, everyone will have to find non-labor sources of income because the labor market will go away. What should young people be learning for X years?
Actually, it's X - S, if S is the number of years the person will remain in school.
For example, if we assume X is 20 -- I find it hard to imagine it will take longer than 20 years for the entire job market to be automated, but the predictions that it might be right around the corner might be premature and it might take a full 20 -- then if a person is graduating now, this year, S = 0 and the person will be in the labor market for 20 years. (Might not be that easy -- AI layoffs appear to have already started -- but let's assume our hypothetical graduate will be able to stay in the workforce right up to the very end.)
If the young person is starting college now, then S = 4, so X - S = 16, so the person will be in the workforce for 16 years. If they are starting high school, then S = 8, so X - S = 12, so the person will be in the workforce for 12 years. If the person is starting elementary school, well, the "K12" designation right there tells you S = 12, or 13 if you include the "K", so X - S = 8 or 7. So a person starting elementary school now will have 8 years in the labor market, and someone starting kindergarten will have 7 years. A child born this year will have S = 22, which makes X - S a negative number (-2), which means the labor market will be gone 2 years before they can graduate college.
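The X − S arithmetic above can be spelled out in a few lines (X = 20 is the post's assumed horizon, not a prediction, and the S values are the rough schooling estimates from the text):

```python
def years_in_labor_market(X, S):
    """Years of paid work left: X years until full automation,
    minus S years still to be spent in school."""
    return X - S

X = 20  # assumed years until the entire job market is automated
stages = {
    "graduating now":       0,
    "starting college":     4,
    "starting high school": 8,
    "starting elementary": 12,  # 13 if you count kindergarten
    "born this year":      22,
}
for stage, S in stages.items():
    print(stage, years_in_labor_market(X, S))
# graduating now 20, starting college 16, starting high school 12,
# starting elementary 8, born this year -2
```

The negative result for a child born this year is the point: under this assumption, the labor market is gone before they can enter it.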
When you spell it out like that, it actually raises the question of whether young people should be in school at all. Maybe they should be trying and failing to start businesses using AI, so by the time they become adults, they will be owners of profitable businesses generating revenue from products and services generated by AI?
There's also the issue that for young people with time left to be in the labor force, most of the jobs aren't going to be "white-collar" college-degree-type jobs. They're going to be jobs like cleaning hotel rooms. I always think of cleaning hotel rooms because, somewhere around 20 years ago (I don't remember exactly when, but circa 2005 seems about right), someone got the idea of remote-controlling a robot to clean a room. Even with the motors and actuators of the day (which were worse than those that exist today), the human (who was actually a grad student in a nearby building) was able to clean the room by remote-controlling the robot, and neither the robot nor the human had any additional assistance. (Of course it took a lot longer than it would take a human to walk into the room and clean it, but...) This proved that the missing component preventing robots that clean hotel rooms from existing wasn't motors or actuators or any other physical robotics technology, it was intelligence. (And this is still the state of affairs today -- AI for robotics lags behind AI that generates language, sound, and images.)
I always notice today when I hear people say things like, AI is wiping out the low end of the IQ spectrum, or wiping out the middle, and you will have to be super high IQ in the future to get a job. Well, "IQ" does not mean what you think it means, if we're judging "IQ" by what's easy or hard to automate. If we're judging "IQ" by what's easy or hard to automate, then people cleaning hotel rooms come out looking like geniuses. As a society, we are not used to the idea that mathematicians are dumb and people who clean hotel rooms are geniuses. From where we are right now, it looks like the job of a professional mathematician will probably be easier to automate than the job of a person cleaning hotel rooms. Maybe the next popular mantra after "Learn to code" will be "Learn to clean hotel rooms"?
The myth of automated learning
#solidstatelife #ai #genai #agi #technologicalunemployment #aisafety #aiethics #education
Interesting commentary. I didn't experience the obedience conflict that you did. And I'm nowhere near as cynical about the lack of intrinsic motivation in the current educational institutions.
But, I've recently learned that Albert Einstein was an extremely independent and disobedient student. So you are in good company.
As far as the question of how humans will adapt to learning in the age of AI, I think we're in a big transition period, and consequently it's impossible to predict from the evidence we see now.
Humans obviously have a strong survival instinct, so the students of today will figure a lot of it out ok. We older people can't very well guess what it will take for a now 16-year-old to survive and thrive in a high-tech world. But with 8 billion people in the world, there'll always be some who thrive and some who won't.
I didn't experience the obedience conflict that you did. And I'm nowhere near as cynical about the lack of intrinsic motivation in the current educational institutions.
Doesn't that just mean you're normal? If your extrinsic motivation levels are above the minimum threshold, and normal people (in the middle of the bell curve) are above the threshold, then you wouldn't have a problem, right? I've spent decades trying to figure out why I failed in school, and I think having abnormally low extrinsic motivation levels is part of the answer. I've never experienced life as anyone but me but I'm guessing normal people have much higher extrinsic motivation levels. And from the perspective of the school institution, if students can be controlled with rewards, you don't need the punishment. If the carrots work, you don't need the sticks. For me, the carrots didn't work, so they had to bring out the sticks. I was a bad person who had to be punished.
I've recently learned that Albert Einstein was an extremely independent and disobedient student. So you are in good company.
lol. Maybe we both had obedience problems, but that's where we part company. I'm nowhere near as smart as Einstein.
Humans obviously have a strong survival instinct, so the students of today will figure a lot of it out ok. We older people can't very well guess what it will take for a now 16-year-old to survive and thrive in a high-tech world. But with 8 billion people in the world, there'll always be some who thrive and some who won't.
I disagree. I think the majority of people will fail to make the transition from "labor income" to "non-labor income".
If AI automates all jobs, then all humans who currently survive on labor income (directly or indirectly -- many people such as children don't participate in the labor market, but survive based on income from others who do) will have to find a non-labor source of income.
This can be ownership in a business -- even if the income currently requires active work, when that work is automated by AI, the owner retains the income the AI produces. It could be a franchise rather than a new venture. It can be partial ownership of a business in the form of stocks, which can pay dividends or appreciate in value. There are countless additional variations such as real estate investment trusts (REITs), mutual funds, and exchange-traded funds (ETFs). "Private equity" can do something analogous to this with non-publicly traded stocks, from what I understand. There's also being a venture capitalist.
There's all kinds of online businesses including content creation (YouTube, podcasts, newsletters), affiliate marketing, advertising revenue from blogs/websites, online course sales, or other content that can be delivered as ebooks, etc, software sales (all those mobile apps you pay a few dollars for), online subscription services, dropshipping businesses (all those ads you see of Chinese goods you can order through a website you never heard of instead of Amazon, etc -- we'll see how much of this continues with tariffs), print-on-demand t-shirts and other merchandise, vending machines in the physical world (or laundromats, the stereotype of Asian immigrants). I hear people own so much junk these days that self-storage facilities are a pretty lucrative business.
Non-business assets such as commodities can appreciate in value. "Collectibles" such as art can be considered a variation on commodities trading.
There's active trading of stocks, bonds, forex (currency exchanges), and there's option contracts on stocks, commodities, bonds, forex, and so on. For the truly daring there's cryptocurrency trading. With some cryptocurrencies, you can make money by staking.
There's interest income from bonds, private loans, peer-to-peer lending, and so on. There's high-yield savings accounts and annuities and so on.
There's renting or leasing physical assets -- real estate (including Airbnb!), farmland, cars, boats, planes, farm equipment, machinery, etc. One can receive payments for the extraction of minerals, oil, gas, or timber from land if one is the owner of the land.
There's royalties and all manner of "intellectual property" licensing. There's royalties from music, films, TV, books, photography, art, patents, trademarks, software, brand identity, designs, and more. Semiconductor companies license "IP cores" -- everything from microprocessor architectures to memory and peripheral controllers.
The thing is, when we talk about labor income going away, people always start talking about universal basic income (UBI). Is it really necessary to have UBI, or can we just get all of humanity surviving on non-labor sources of income? There seem to be quite a lot of them so it ought to be possible to get everyone on non-labor income -- or not?
The reason I think the answer is that most people will fail to make the transition from labor income to non-labor income is that there is a "gotcha": all non-labor income requires some form of "property". And that property has to be valuable to be income-generating.
I realize we're stretching the word "property" a bit here to include some non-physical things like a brand identity. But the vast majority of humans own little in the way of property -- physical or intellectual or brand identity or anything else. For the vast majority of humans, their time and ability to trade that time for work is the thing that enables them to get the money they need to survive.
Let's put some numbers on this. The US federal government puts the official poverty line at $15,650 per year per person. Let's suppose one can reliably generate 10% above inflation from assets. Most people actually can't do this, but never mind that for now. (Most people can't actually survive on $15,650 per year, either, but never mind that, also, for now.) That means the necessary assets to own must be valued at $156,500 per person.
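The arithmetic, spelled out: at an assumed 10% real return, the assets needed to clear the poverty line are just the income requirement divided by the rate.

```python
poverty_line = 15_650  # official US federal poverty line, $/year/person
real_return = 0.10     # assumed reliably achievable return above inflation

# Assets needed so that assets * real_return >= poverty_line
required_assets = poverty_line / real_return
print(round(required_assets))  # → 156500
```

Both numbers are the post's assumptions; a lower sustainable return (say 4%) would push the required assets per person proportionally higher.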
I don't think UBI will ever happen, but I know lots of you will tell me that it will, that it may be politically infeasible at this moment but that will change once the unemployment rate hits a certain threshold. Actually the number to watch isn't the unemployment rate, it's the total labor force participation rate -- because when people go back to school or go into some training program or do anything but actively look for work, they are not counted in the unemployment rate. The total labor force participation rate peaked in the year 2000 -- in the dot-com bubble -- right before the dot-com bust.
In general, some dates to keep in mind are:
Life expectancy (in the US) peaked in the year 2014, life expectancy for people without college degrees peaked in 2010, the total labor force participation rate peaked in 2000, and the fertility rate peaked in 1971. Actually the 1971 date is global -- in the US, because of the "baby boom" phenomenon immediately following WWII, the fertility rate actually peaked earlier, in 1957. But 1971 seems to be the earliest date the decline that we're seeing worldwide in fertility first became noticeable, and the place where it first became noticeable was Japan.
So, my hypothesis is as follows: When machines maximally complement human beings, you get a boost in all these things: you get a boost in fertility, a boost in employment, a boost in life expectancy. When machines compete against human beings, you see declines in all these things: first you see a decline in fertility, then a decline in employment -- but that shows up in the total labor force participation rate, not the unemployment rate -- and then a decline in life expectancy.
Europe saw a fertility boom in the 1300s -- which contributed to European populations moving to the Americas in the couple hundred years that followed -- due to a bunch of agricultural "inventions" in Europe, including: the 3-field system of crop rotation (one part spring, one part autumn, one part left fallow), heavy plows and the moldboard plow (actually invented in China and migrated to Europe) that enabled crops to be grown in less fertile soil, horse collars and harnesses and horseshoes, watermills and windmills for grinding grain and other mechanical processes, selective breeding of livestock and plant crops, irrigation and drainage techniques, improved composting, and iron tools for reaping and threshing that replaced weaker and less durable wooden tools.
This was later followed during the industrial revolution with the invention of the Haber-Bosch process for nitrogen fixation, exploding the availability of synthetic fertilizer, the invention of chemical pesticides, breeding of especially high-yield grains and rice, and mechanized irrigation.
Now we find ourselves in a situation where, despite all these inventions, fertility is declining everywhere in the world, and in many places is already below replacement rate. I've heard that in South Korea, in 3 generations, the population will be about 5% of what it is today. And now we see total labor force participation going down and the ultimate of all measurements of human well-being, life expectancy, also going down -- though not everywhere (yet?). It's interesting that technology complemented humanity throughout the agricultural and industrial revolutions and it was only the invention of electronic brains where we start to see competition showing up in the form of declining fertility. And it shows up earlier than you might expect -- in 1971. You could, though, attribute that to something else like the invention of birth control.
I know there are currently a variety of government welfare programs, and there are people who survive on that income. So it's not totally outlandish to see this as a mode of survival that people could occupy. I just have a hard time seeing how it can scale up to the numbers required, if it really happens that the labor market gets automated away completely. We're already in a situation where the US federal government is paying more in interest payments on the debt than the military, and the national debt is still going up. What's the solution, Elon Musk and his chainsaw? To have UBI, we'd have to increase the outflow of capital and thus the debt -- and interest on the debt -- even more. How does the math work on this? I don't think it does.
It's interesting how Ray Kurzweil so accurately and presciently nailed the exponential growth of information technology -- yet was so profoundly wrong on everything outside that domain. He predicted, for example, exponential growth in computer display technology, and that by 2010 we'd all have glasses that project images directly onto our retinas with lasers -- that didn't happen. I saw an interview with him on YouTube last week and he was saying the same things he always says -- that we'll "merge with machines" and it'll be a utopia, that technology makes everything cheaper and cheaper and life gets better and better. As I've described above, that's obviously wrong, yet Kurzweil seems stuck like a broken record in his old ways of thinking.
If you're thinking humans will just go back to surviving the old-fashioned way, by subsistence farming: going back to what I said earlier about the need to own property for non-labor income, even subsistence farming -- where you don't need monetary income to trade for food at the grocery store -- still requires owning the actual farmland as property.
There are some communities like the Amish that survive in this mode. It's not impossible. But it does require owning the farmland. It's hard to see the majority of people switching to "off-grid living" and becoming their own farmer. Perhaps, though, in 500 years, we'll see a world with an advanced economy where all the economic activity is AI agents trading with each other, and humans survive outside that economy as subsistence farmers, much as how other wildlife on this planet survives outside the human economy right now. Humans will become "just another species" that survives in a subsistence mode while AI agents run the planet.
So that's how I see the situation. I've been sitting on the OpenAI universal basic income experiment research paper for about 9 and a half months now and still haven't said anything to you all about it. Mainly that's just because of time -- it's 147 pages. My impression skimming it over is that they expected their UBI experiment to show a lot of benefits and improvements in the lives of the recipients, but in actuality, it didn't have much effect.
All this is predicated on the assumption AI will automate all work. We're clearly not at that point yet. Language models, for all their wonders, still can't do all language tasks -- I'm a software developer, and LLMs can't take over my job -- not yet, anyway. I guess they have succeeded at obsoleting translators who work with written text. Diffusion models haven't entirely obsoleted artists yet, though I guess they have for what we call commercial "stock art". And so far these models have not been of much use in robotics.
We're still a long way from a robot that can clean a hotel room. Robotics and the ability to interact with the physical world lags behind purely abstract mental tasks like generating language or generating images.
What will happen when people don't have money to buy things because all jobs have been automated? Won't the economy collapse without consumers with money to buy things? According to this video, right now the wealthiest 10% of households by income already account for more than 50% of consumer spending. Let me say that again: already, right now, half of all consumer purchases are made by only 10% of the population.
So businesses are already shifting their product and service offerings from "the masses" to "cater exclusively to other businesses or wealthy asset owners who make their income from investment returns rather than exchanging their hours for dollars." This trend has been increasing since the 90s, and especially since about 2020.
Obviously if you're still exchanging your hours for dollars, you need to get out of that position.
In the future, we'll also have a lot of AI agents as consumers. As I've commented before, we already have AI agents making trading decisions on the stock market. Any entity with enough intelligence to make trading decisions can make trades just like humans, so it's not hard to imagine an economic system where most of the products and services are produced by AI and most of the buyers and consumers of those products and services are also AI. There's nothing in the laws of physics that dictates humans must remain in the loop.
So, long story short, I don't think "strong survival instinct" will be enough for all the planet's current 16 year olds to get up and use AI to start a business or somesuch and overcome the obsolescence of the job market. If it's really true that machines become smarter than humans, then machines will be smarter than humans, period full stop. Just as humans, being smarter than other species, are able to appropriate the resource flows that go to other species and cause many of them to go extinct (and cause many that don't go extinct to significantly decline in population), so machines smarter than humans will appropriate resource flows that previously flowed to humans. The human population will decrease, something already foreshadowed in declining fertility rates.
I think machine intelligence is a fundamental phase change, not more of the same of "survival instinct" and young people figuring out new ways to "survive and thrive".
Ok, I guess back to you to explain why I'm completely wrong and today's 16-year-olds will figure out how to start businesses with AI and somesuch and "survive and thrive" in the decades ahead.
So, my hypothesis is as follows: When machines maximally complement human beings, you get a boost in all these things: you get a boost in fertility, a boost in employment, a boost in life expectancy. When machines compete against human beings, you see declines in all these things
Well, you do make a good case for a likely dystopian future. But I think there are too many variables and unknowns to be sure of anything. I don't understand macro-economics well enough to judge your predictions about income as a source of survival. One thing I don't see as connected is the fertility rate.
Also, the industrial revolution brought machines that competed with human muscle power and that didn't bring the declines. Now the machines compete with brain power which may be very different, but maybe not.
I do agree that we seem to be entering a phase transition, but we have to be careful that we're not blinded by the novelty and hype of it all.
I'm reminded of another major variable affecting human adaptation to ubiquitous AI. Recently I skimmed through an interview with Demis Hassabis of DeepMind fame who pointed out that current AI has little hope of dealing with biological emotion and motivation, human feelings, human contact.
That is probably more significant than it is currently treated in the ongoing hype of current AI.
And then there is this, LLM induced insanity:
404media.co/pro-ai-subreddit-b…
Pro-AI Subreddit Bans 'Uptick' of Users Who Suffer from AI Delusions
“AI is rizzing them up in a very unhealthy way at the moment.”
Emanuel Maiberg (404 Media)
And speaking of chatbot-induced insanity, what in the world is this stuff Molly White has been posting about: "the chatbot that rates men 'subhuman', encourages dramatic surgeries, and repeats incel beliefs about how women are 'unfair' and 'hypergamous'", while "OpenAI has chosen to leave it available and prominently featured on their shared GPTs page."
First I had heard about this, so I hope this isn't just Molly White going weird.
One thing I don't see as connected is the fertility rate.
Here's how I think of it. I use a concept called "cost of children", though I should probably think of a better term, as we tend to think of "cost" as measured in "dollars", but "cost of children" is measured in time -- it's the length of time from birth to when a child can support themselves economically, and place no more financial burden on the parents. And I guess it is a cost in dollars because you can integrate over time and come up with a dollar total that it costs the parents to support the kid.
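The "integrate over time" idea can be sketched in a few lines of Python. All the ages and dollar figures below are made-up placeholders for illustration, not real estimates:

```python
# Sketch of the "cost of children" idea: the total dollar cost is the
# annual support cost summed (integrated) over the years until a child
# becomes economically self-sufficient. Numbers are hypothetical.

def cost_of_children(self_sufficiency_age: int, annual_support_cost: int) -> int:
    """Total cost borne by the parents, accumulated year by year."""
    return sum(annual_support_cost for _ in range(self_sufficiency_age))

# A pre-mechanized farm child, self-sufficient around age 8:
farm = cost_of_children(8, 2_000)      # -> 16,000
# A modern child supported through a high school diploma (age 18):
modern = cost_of_children(18, 15_000)  # -> 270,000
print(farm, modern)
```

The point of the toy numbers is only the structure: pushing the self-sufficiency age out (and raising the yearly cost) multiplies the total, which is the "cost of children" rising as the discussion below describes.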
If you read about fertility rates, one of the first things you'll come across is that the biggest factor is "urbanization", with various other factors having an effect. So you might think, ok, there's no connection between technology or AI with fertility because the primary driver of decreasing fertility rates is "urbanization", the movement of people from rural areas to cities.
But that's just the surface appearance; if you scratch below the surface, I think you'll see they're connected.
When the US became a country in 1776, more than 80% of the population lived and worked on farms. My understanding is that a child became self-sufficient typically between the ages of 6 and 9. That is to say, once a kid was somewhere between 6 and 9 (depending on the kid and how big and strong they were), they could do enough farm work to produce their own food, or the economic equivalent. Families could treat children as extra farm hands. Big families, with 8, 9, even 10 kids, were not that uncommon.
Fertility rates didn't drop as fast as you might think when people first moved to cities because kids got work in factories. But if you look around at the world of today, factory work tends to be high-skilled work, often requiring college degrees, and any factory work that isn't skilled tends to be offshored to low-wage countries. Even the low-wage countries are tending to increase their use of advanced technology in manufacturing. China used to be a low-wage country, but now uses a lot of robotics in its manufacturing.
Anecdotally, it seems like in my parents' generation, a man with only a high school diploma could buy a house and support a wife and a kid or two. People of my generation could buy a house and support a kid or two with both spouses working, with only high school diplomas, but it would be difficult. People of the generation after me (millennials) and the one after them (gen Z) can't realistically buy a house and support any children, even if both parents work, with only high school diplomas.
If more than high school diplomas are required, that means the "cost of children" goes up, because it means people have to be supported by their parents for longer because they are in school longer. And with a "high school diploma" we're already talking about age 18 -- way beyond the age 6-9 that we started with for kids working on farms in the pre-mechanized era.
So you see "urbanization" listed as a primary cause of decreasing fertility rates, but what is really behind it is the advancement of technology -- it is the mechanization of farm labor that drives people off farms, and it is the advanced technology used in manufacturing (and now service jobs) that drives up the skill requirements, increases the time people have to spend in school, drives up the "cost of children", and drives down fertility rates.
Beyond anecdotes, I believe there is actual economic data to back up what I've said above. Median age of first-time home buyer in 1991: 28 years old. Median age of first-time home buyer in 2024: 38 years old. Trending upward. Most of that upward trend has been since after 2008, interestingly enough.
resiclubanalytics.com/p/the-va…
Cost of education is trending upward:
bestcolleges.com/research/coll…
Looks like covid caused a little dip but it has resumed its upward trajectory.
I think it all fits together. When technology maximally complements humans -- as it did after the invention of farm technologies like heavy plows, improved irrigation, composting, and iron tools for reaping and threshing (all the stuff I listed above), but before the mechanization of farm work with petroleum-powered machines and before the computer's automation of mental, rather than physical, labor -- human fertility goes up. When technology competes against humans in the labor market, either through mechanizing farm labor or automating the mental labor of urban jobs, fertility goes down.
Now, in my comments above, I've been making the assumption that full automation of all jobs will be possible over a 20-year time span. If that's true -- and I can't guarantee that it is, but I would be very surprised if it doesn't turn out to be true -- then in essence, the "cost of children" will increase to infinity -- there won't be any amount of time that enables a child to become economically self-sufficient. At least not through labor. Children that can be bestowed trust funds or other investments that generate money they can live off of will be an exception. If the much-talked-about universal basic income becomes a reality, then that will change the equation -- children will become self-sufficient upon qualification for UBI. But as I've explained before, I think UBI will never happen. You're welcome to your own opinion on that, I suppose. It would be nice if I could paint an optimistic picture of the future, but it wouldn't be honest because I don't believe that. We currently live in a time of downward trends in indications of well-being despite rapid technological progress. I predict that will continue.
Cost of College Over Time
College costs have surpassed inflation increases over the last decade. Find out what's driving the price hikes, and discover just how much costs have risen over time.
Jessica Bryant (Best Colleges)
Ok, I can follow that logic.
When technology competes against humans in the labor market, either through mechanizing farm labor or automating the mental labor of urban jobs, fertility goes down.
What looks like apathy is often risk management.
What looks like disengagement is a cost-benefit calculation.
I wrote about the collapse of civic belief - and why cynicism isn't a crisis of morality.
It's a failure of systems.
Mastodon Migration reshared this.
How does Joan Westenberg keep nailing it?
"Cynicism is not a pathology, or a character flaw. It's an entirely rational response to a political economy that has stopped delivering meaningful returns."
1/2
> And the secret the cynics miss: the house wins, yes. But only if you keep playing the same game.
That's one hell of a closing line. Thank you for writing.
Other replies already pointed out how good the article is.
So, I will just focus on something small. The below is an excellent mental image invoking the precariousness of the situation:
"institutions [...] greenlight tech monopolies with all the foresight of a toddler playing Jenga"
"Adaptive" doesn't mean good. Stealing others' bread rations in the time of great famine is very "adaptive." It's also morally reprehensible, and literally no-one will miss you when you're hanged for it.
In politics, it's the same. Weathervanes are the most adaptive things in the world. It's no coincidence that the word "weathervane" ("флюгер") is often used in Russia to describe the Kremlin propagandists.
Cynicism is the next to last kind of ideology you want to be praising.
What are you listening to this Monday?
It's Always Monday On The Moon
~ #Dgar
ffm.to/dgaritsalwaysmondayonth…
Dgar reshared this.
Cable Ties.
All three albums, on repeat.
(May have a production deployment I'm ploughing through).
that baddy Xillian is giving me 1990s Beto Cuevas from La Ley. Those who know, know.
Altbot
in reply to Pratik
The image is a black and white photograph of a modern skyscraper with a reflective glass facade. The building's surface is composed of numerous square glass panels, creating a grid-like pattern. The reflection on the glass shows a distorted image of another building, which appears to have a unique, wavy architectural design. The reflection is particularly noticeable on the left side of the image, where the glass surface curves inward, distorting the reflected image. The building's structure includes a protruding balcony on the lower level, adding to the modern aesthetic. In the foreground, there are bushes, providing a contrast to the sleek, urban environment. The sky is visible in the upper left corner, suggesting a clear day. The overall composition emphasizes the interplay between the building's architecture and its reflection, creating a visually striking scene.
Provided by @altbot, generated privately and locally using Ovis2-8B
🌱 Energy used: 0.217 Wh