Beginner here... do you guys see MMT as inextricably linked to FJG?

You don't think that giving a subsistence-level monetary allowance is actually "good" for people. Instead, you think that a "small amount of pain" (i.e., labor) is good.

I do think that, but I definitely do not think that alone precludes the possibility of a UBI being a policy we should implement. I think the biggest issue with UBI is simply that one-size-fits-all, dramatic policies have not historically been successful. For most policies, there is a way to do them wrong and a way to do them right, and one might even expect that the potential for good if you do something right and the potential for harm if you do it wrong are related. UBI is essentially a microeconomic solution to macroeconomic problems. I believe that should shape how we approach any analysis and vetting of the proposal. The microeconomic benefits can be measured, but the most important thing for vetting is developing a range of potential learning outcomes, so that we can draw inferences from whatever happens.

I think we owe it to UBI to try it out.

I have a lot of experience with software development. Software testing is a completely different beast from other kinds of science. In normal scientific work, you want a statistically significant number of independent trials carried out under carefully controlled conditions.

Because software engineers can control the environment so much, and because there is such a huge number of tiny things that need to be tested--every line of code would theoretically need to be tested independently, and also in the context of the entire system (as in A/B testing)--the testing methodology of software is very different from the approach to testing in other sciences.

When you are testing software, the problem often shows up in an area different from the one you expect, and there are layers of assumptions you have to make that go into that. One of the most important skills is identifying the layers of your assumptions. What I have learned from debugging software is: 1. The compiler is always right. This means that systems that have been thoroughly used and tested by a large number of people, over a long period of time, are very unlikely to contain bugs (at least in software). 2. You are always making more assumptions than you think.

So often the best approach to finding a difficult bug is saying "I am a silly idiot." Both parts are important. Sometimes we get the complex challenges right but miss something apparently trivial or unimportant; at least that's my experience personally, both in software and in playing chess. This means that your mistakes are unlikely to show any sort of pattern or regularity; they might even come across as fanciful or playful, if you were in the right mood. Secondly, you have to tell yourself you're an idiot, because to get where you are, you have been implicitly told that you are qualified to do what you are doing, and this leads to the worst kind of bias. I would call this the "way the world is supposed to work" bias. You are supposed to get an education, become qualified, and perform a career. That is the narrative. We will work hard to preserve our notion of self, perhaps harder than anything else. Presenting yourself as a fancifully deranged imbecile is the best way to actually track down what went wrong: you make no assumptions about how the world is supposed to work, and you are not fulfilling some psychologically ingrained notion of self.

Because of the high cost of thoroughly testing software, and because humans are able to learn and master critical thinking about software, software is never tested to the level of what the scientific community would consider statistical empirical evidence, and yet we rely on it day in and day out. Software runs on anecdotal evidence. Companies are often happy to hit metrics like 95% code test coverage, and a "code review" is basically someone else looking at the code you write and listening to you explain it, which is nothing remotely like the process of scientific peer review.
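As a sketch of why coverage metrics fall short of real evidence (the function and tests here are my own invented example, not from any particular codebase): a test suite can execute 100% of a function's lines and still enshrine a bug, because coverage only measures which lines ran, not which behaviors were actually checked.

```python
# Coverage counts executed lines, not verified behaviors.
# Every line of this function runs under the tests below, so a coverage
# tool would report 100% -- yet the bug survives.

def safe_ratio(a, b):
    # Bug: dividing by zero should probably raise or return None,
    # but this silently returns the numerator instead.
    if b == 0:
        return a
    return a / b

# These "tests" exercise every line of safe_ratio...
assert safe_ratio(10, 2) == 5.0
# ...and this one even asserts that the buggy behavior is "correct".
assert safe_ratio(7, 0) == 7
```

The point being: the tests pass and coverage is perfect, but nothing in the process resembles an independent trial of the claim "this function divides safely."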

Economics often faces exactly the opposite problem from software development: there is so little relative control over the environment that it is very hard to make safe assumptions, and conducting experiments is not only very expensive, it is often not possible at all. In software it is extremely easy to conduct experiments, but there are so many different things to test that you have to be selective about your experiments to get anywhere.

Even though economics is the polar opposite of software in this respect, I think it could benefit from many of the same lessons, and it is facing a similar challenge: the sheer difficulty of doing proper testing.

When I write software, my goal is to make sure my mental model is aligned with reality. I do that by thinking critically about all the assumptions I am making, and then being extremely selective about which assumptions I am going to test and evaluate. It is simply too costly and prohibitively difficult to methodically test every possibility; you have to leverage the principle of learning.

The only thing you need for learning is some form of authentic feedback. Beyond that, the most useful thing is probably self-awareness about what you are actually learning from that feedback. Learning is not the same thing as scientific testing or evidence; it is a process whereby someone masters or gains proficiency in a specific art. The ideal learner may not be able to tell you everything that is going on, but they can demonstrate a high degree of proficiency in their art. In most respects, learning resembles magical thinking and trusting anecdotal evidence, but it actually works when you are just trying to master a specific sequence of actions for a specific outcome. Again, this does not mean you can accurately describe why things are happening, but you are able to build a mental model that lets you perform a task proficiently.

Most of our social systems should be considered learning systems. They evolve over time. The political and social infrastructure may not be justified by coherent logic, but it does demonstrate adaptive mastery.

The point I am trying to make is that I don't think UBI should be "tested", at least not "empirically"--that is too vague and too expensive to do properly. I think it should be vetted, and then you should commit to it or not. If someone tries to ollie and lands flat on their face, that doesn't mean an ollie is impossible or not a useful thing to do.

Don't get me wrong, I think a UBI could be incredibly useful. But I don't think it would be trivial; I think it would be very impactful, and I think it is something that could be done wrong, with very bad consequences. I am not willing to try impactful things without demonstrating mastery. I think the only shot a UBI has is a private, participatory system, with crypto, where people try out a million things until something works.

I tried to learn to ollie on a skateboard, and I can do it while standing still, but not while moving. I am not going to try to drop in from a 30' half pipe. I have skated thousands of miles for transit, but I can't even ollie properly. We have to understand scale and relevance: the individual benefits of a basic income say nothing about what a UBI changes for society. I am an old geezer content to keep the rear wheels of my skateboard on the ground. I am not going to support UBI.

/r/mmt_economics Thread Parent