This article is very well written. Below is an AI-assisted interpretation (easier to follow than a direct translation), followed by the translation itself.

Hello everyone. Do you often see arguments online where one side speaks eloquently, cites a wide range of sources, and sounds "very intelligent," yet you still feel uneasy, as if something is not right? Your instinct may be correct. Sometimes the more elaborate an argument is, and the more it "explains everything," the more suspicious it should be.

Ethereum founder Vitalik Buterin recently wrote a long article about exactly this phenomenon, and he coined a useful term for it: "galaxy brain resistance." The "galaxy brain" is the meme in which a normal brain gradually evolves into a luminous brain illuminating the entire universe; it is used to satirize "smart people" who overthink, complicate simple problems, and end up at absurd conclusions. "Galaxy brain resistance," then, measures how hard it is for a way of thinking or a style of argument to be abused to justify anything you already wanted to do.

The article matters because it punctures one of the biggest myths of our time: we assume that debate is about finding the truth, but Vitalik points out that in the real world most fancy arguments are not reasoning but rationalization. What does that mean? Many people first reach a conclusion driven by emotion, intuition, or self-interest (holding a certain coin, disliking a certain group), and only then mobilize all of their intelligence to find sophisticated-sounding reasons to support it. It is "shoot first, draw the target afterward." Arguments with low galaxy brain resistance are their favorite weapons, because they are all-purpose: they can be used to defend almost anything.

In his article, Vitalik names several of the most popular and dangerous low-resistance thinking traps. Let's look at how each one misleads people.

Trap 1: The trap of "historical inevitability"

"This was bound to happen sooner or later, so not only should we not stop it, we should accelerate it!"

Vitalik's example is AI boosters in Silicon Valley. They argue that the complete automation of the economy is "inevitable" and human labor is destined to be phased out, therefore we should accelerate the process now. It sounds plausible: the wheels of history keep turning. But Vitalik reminds us to ask who is saying this. It is the companies fully committed to building AI and profiting from it. This is a classic low-resistance argument: it quietly swaps a (perhaps) reasonable long-term prediction ("the economy will eventually be automated") for a conclusion that does not follow ("so we should accelerate it now").

Why is this argument so bad?

1. It makes you give up. It implies that resistance is pointless. Vitalik argues the opposite: when everyone is telling you "give up, it's inevitable," that is precisely when your resistance has the most leverage.
2. It hides the other options. The real choice is not just "go all out" versus "shut down." We could, for instance, focus on AI that assists humans rather than replacing them, buying more time for a safe transition.
3. It serves private interests. "Inevitability" is a fancy facade for the real motive: the pursuit of profit and power.
Trap 2: The trap of "the grand future"

"For the grand blueprint N years from now / for the trillions of people of the future, we must now..."

Vitalik brings up "longtermism." To be clear, he is not against thinking long term. Building roads, educating children, and saving for retirement are all necessary, sensible forms of long-term thinking. What he opposes is the abused, obsessive version, for example: "For the happiness of the 400 trillion people who may exist over the next 500 million years, we must sacrifice everything today in order to do X."

The crux of the trap is that once a goal is placed far enough away, it becomes detached from reality.

- If you say your project "will be profitable next quarter," everyone will see the results next quarter, and it will be obvious whether you were right.
- If you say your project "will save the world in 500 years," nobody can come back in 500 years to check.

This changes the nature of the game. It is no longer "who can actually create long-term value," but "who can tell the most impressive long-term story today." Vitalik gives two telling examples:

1. Bubbles in low-interest-rate environments. When interest rates are very low, money is cheap and people stop caring about near-term returns; they chase "narratives about the future," which ends in bubbles and crashes. (Think of "blockchain solutions designed for the global dental industry.")
2. The political "bridge to nowhere." Politicians request huge budgets for infrastructure nobody will ever use, citing "long-term value" as the justification.

How do you avoid the trap? Vitalik offers a rule of thumb: if an action has questionable long-term benefits but reliable harms, don't do it.

Trap 3: The trap of "this harms society / morality"

"This thing is disgusting / immoral / damages the social fabric and must be banned!"

Many people want to use the coercive power of government to regulate other people's private lives simply because they "can't stand it." For example, some people call for banning synthetic meat on the grounds that "real meat was created by God, while artificial meat is man-made... it goes against nature!" But "I can't stand it" is too naked a reason, so they package it in low-resistance arguments: "This will destroy our moral fabric!" "This threatens social stability!" "This is being imposed on us by the global elite!"

Vitalik's point is that a phrase like "the moral fabric of society" is so vague that you can use it to oppose any new thing you happen to dislike. Homosexuality, new music, synthetic meat... all have been attacked with this label. He leans toward a moderate form of liberalism: if you want to ban something, you must provide a clear account of the clear harm it does to a clear victim. If you cannot explain who the victim is or what harm they suffered, you are probably just dressing up a personal aversion ("it disgusts me") in sophisticated language.

Trap 4: The trap of "this is for the poor / for upward mobility"

"Speculation and gambling are not bad; they are the only hope the poor have of moving up!"

In the cryptocurrency space we often hear this kind of defense of high-risk speculation.
This argument sounds noble and compassionate, but Vitalik thinks it is thoroughly galaxy-brained. Why?

1. It is mathematically wrong. A casino is a zero-sum game (often a negative-sum game): a poor person who walks in is more likely to walk out poorer. Basic economics (the shape of the utility curve) tells us that losing $10,000 hurts a poor person far more than winning $10,000 helps them. High-stakes gambling destroys social mobility; it does not create it.
2. Its motives are impure. Who actually pushes this argument? Usually people who are already wealthy and who use the "noble reason" to draw more people, including poor people, into the market so they can sell into the demand and cash out.

Vitalik has consistently urged the Ethereum ecosystem to focus on "low-risk DeFi," and he deliberately says "low-risk" rather than "good." Why? Because "low-risk" is a standard that is hard to abuse: it has teeth, and whether something is high-risk or low-risk shows up clearly in the data. "Good," on the other hand, is too easy to abuse; anyone can concoct a galaxy-brained argument for why their high-risk casino project is actually "good" for society.

Trap 5: The trap of "I can do more from within the system"

"I joined this company / this government (which is accelerating AI, or is corrupt) in order to change it from the inside."

This is Vitalik's sharpest criticism; he calls it "I-can-do-more-from-within-ism." In AI safety, many people say, "I want to join the most aggressive AI companies so that I can have influence at the critical moment and make sure the AI is safe." In real-world politics, many people say, "I stay in Putin's government to use my expertise to soften the damage to the economy." (Vitalik cites a Financial Times report on Russia's technocrats.)

Vitalik considers this argument to have nearly the lowest galaxy brain resistance of all.

1. It provides a perfect excuse for going along with whatever is happening. No matter what you actually do, you can claim you are doing it "to change things from the inside."
2. It is almost always self-deception. In practice you become an efficient cog in the machine, and your professional skill objectively helps the machine you claim to oppose run even more smoothly.

So how do we keep ourselves from being galaxy-brained? Vitalik offers two practical suggestions.

1. Uphold principles (instead of always "calculating the consequences"). Vitalik favors a deontological approach to ethics. You don't need the jargon; it simply means setting hard rules for yourself that you will not break: "I will not steal," "I will not cheat," "I will not kill an innocent person." Why does this matter? Because the alternative, consequentialism ("as long as the outcome is good, any means are justified"), has almost no galaxy brain resistance. Our brains are too good at rationalizing. If you are a consequentialist, then every time you meet a temptation ("this theft would be hugely profitable for me"), your galaxy brain will spring into action to argue why this particular theft is actually good for the long-run welfare of humanity. You can always convince yourself. Firm principles are your firewall against being "too smart."

2. Hold the right bags. In crypto slang, "bags" are the assets you hold (your positions).
Vitalik's point is that your incentives (your bags) and your social circle (your "social bags") are the most powerful forces shaping your thinking.

- Once you own a certain coin, it is hard to view it objectively.
- If all of your friends say AI is safe, it is hard for you to seriously believe AI is dangerous.

You cannot live entirely without bags (people need incentives and friends), but you can at least:

1. Choose your bags deliberately. Stay away from incentives that would distort your judgment.
2. Diversify your bags, especially your social circle.

This leads to Vitalik's final two pieces of advice for people working on AI safety, which also reflect his own choices:

1. Don't work for a company that is racing to build fully autonomous frontier AI. (It will distort your incentives.)
2. Don't live in the San Francisco Bay Area. (The choice of social circles there is too narrow.)

To sum up: on the surface Vitalik's article is about AI, crypto, and politics, but it is really a general guide to staying clear-headed in a complicated world. The most dangerous arguments are not the ones riddled with obvious flaws, but the "universal reasons" that are too flexible, too sophisticated, and able to serve any motive. Real wisdom is not having a galaxy brain that can explain everything; it is knowing when to stop the clever calculations and return to simple, solid principles.
Translation: Galaxy Brain Resistance

A crucial criterion for judging a way of thinking or arguing is what I call its galaxy brain resistance ("galaxy brain" is a meme often used to satirize "smart people" who overcomplicate simple problems or find sophisticated-sounding theories to support broken conclusions): how hard is it to abuse? Can it be used to "justify" anything you already want to do for other reasons? The spirit is similar to falsifiability in science (a theory must be capable of being proven wrong to count as scientific): if your argument can prove anything, then it proves nothing. (In the meme's terms: you want to stop at around the second panel.)

To see why galaxy brain resistance matters, the easiest way is to look at what happens when it is absent. You have probably heard plenty of statements like this:

"We are building a completely new decentralized marketplace that will revolutionize how customers interact with their suppliers and allow creators to transform their audiences into digital nation-states. ..."

In politics, things can get far worse. For example:

"[A certain minority group] is responsible for much of the ongoing social disorder; they have drained our resources. If we could completely eradicate them (and I mean 'completely,' so they can never return), then although it would be a brutal, one-off action, if in the long run it raised our economic growth rate by 0.5%, then 500 years from now our country would be 12 times richer than it otherwise would have been. That would mean countless happier, more fulfilling lives. It would be a grave injustice to condemn our descendants to poverty simply because we are too cowardly to pay this one-off price today."

One way to refute arguments like these is to treat them like problems on a philosophy exam: identify the specific premise or step you disagree with and rebut it. A more realistic approach is to recognize that in the real world, such arguments are almost never reasoning; they are rationalization (the conclusion comes first, and the reasons are assembled afterward). The people making them had already reached their conclusion, usually driven by self-interest or emotion (they are holding the token, or they genuinely hate that minority group), and then fabricated the fancy argument to justify it. The purpose of the fancy argument is (i) to trick the speaker's own "higher" reasoning into submitting to their primitive instincts, and (ii) to grow the movement by recruiting not just the gullible, but also people who consider themselves intelligent (or, worse, people who really are).

In this article I will argue that low galaxy brain resistance is a widespread phenomenon, with consequences ranging from mild to severe. I will also describe some thinking patterns with high galaxy brain resistance and advocate for their use.

Thought patterns with low galaxy brain resistance

Inevitabilism

Look at this recent tweet; it is a prime example of the rhetoric of Silicon Valley's AI boosters:

"Mass unemployment from automation is inevitable. It is an unavoidable natural economic law. Our company (Mechanize Inc.) aims to accelerate this process and ensure a smooth transition, rather than futilely resisting it. Join us in building the future."

This is a classic example of inevitabilism.
The post starts from a (perhaps reasonable) claim: the full automation of the economy will inevitably happen eventually. It then jumps straight to the conclusion: we should actively accelerate that day (and the human unemployment that comes with it). Why actively accelerate it? Well, we all know why: because the tweet was written by a company whose entire business is actively accelerating it.

Admittedly, inevitabilism is a philosophical error, and we can refute it philosophically. If I were doing the refuting, I would focus on three points:

- Inevitabilism over-assumes an infinitely liquid market, where if you don't do something, someone else will immediately step in to fill the gap. That may be true in some industries. But AI is the opposite: it is a field where the vast majority of progress is driven by a small number of people and companies. If even one of them stops, things slow down meaningfully.
- Inevitabilism underestimates the power of collective decision-making. When one person or company takes a stand, it sets an example for others. Even if nobody follows in the short term, it plants seeds for more action later. Standing up against something can even remind people that taking a courageous stance works at all.
- Inevitabilism flattens the space of options. Mechanize could keep pursuing full automation of the economy. It could also shut down. But it could also reorient toward building forms of partial automation that empower the humans still in the loop, maximizing the period in which human-plus-AI collaboration outperforms pure AI and buying us more time to transition safely to superintelligence. And there are surely other options I haven't thought of.

In the real world, though, inevitabilism cannot be defeated by pure logic, because it was never produced by logic. Its most common use in our society is to let people rationalize, after the fact, things they are doing for other reasons, usually the pursuit of political power or money. Understanding that fact is often the best countermeasure: the moment when others most want you to believe that "everything is already decided" and to give up is precisely the moment when you have the most leverage.

Longtermism

"Longtermism" is a mindset that emphasizes the enormous stakes of the distant future. Today many people associate the term with the longtermism of Effective Altruism, as in this introduction from 80,000 Hours (a career-advice organization connected to effective altruism):

"If we only consider the potential future human population, the number is staggering. Simply assuming a population of 8 billion over each of the next 500 million years, the total would reach approximately 400 trillion... and once we are no longer confined to Earth, the population potential we should care about becomes truly enormous."

But appeals to the long term are much older than that. For centuries, personal financial planners, economists, philosophers, people debating the best time to plant a tree, and many others have called for sacrifices today for the sake of a better future.

I hesitate to criticize longtermism because, well, the long-term future really is important. We wouldn't hard-fork a blockchain just because someone got scammed: even though there would be a clear one-time benefit, doing so would permanently damage the chain's reputation.
As Tyler Cowen argues in *Stubborn Attachments*, economic growth is so important precisely because it is one of the few things that reliably compounds into the future instead of fading out or getting stuck in a cycle. Educating your children pays off a decade later. If you refuse to think long-term at all, you will never build a road.

Failing to value the long term causes real problems. One I have personally fought against is technical debt (the extra future work created when developers take shortcuts): when software developers focus only on short-term goals and lack a coherent long-term vision, the codebase becomes uglier and more broken over time (see my efforts to simplify Ethereum's L1).

But there is a pitfall here: longtermist arguments have very low galaxy brain resistance. The "long term," after all, is far away; you can spin all sorts of beautiful stories in which doing X leads to almost any good outcome in the future. And in the real world, when we watch how markets and politics actually behave, we see the downside of this again and again.

In markets, the variable that commonly separates the two regimes is the interest rate. When rates are high, only projects with clear near-term profits are worth funding. When rates are low, well, "low-interest-rate environment" has become a well-known shorthand for a period in which large numbers of people create and chase ultimately unrealistic narratives, ending in bubbles and crashes.

In politics, there is the familiar complaint that politicians act short-sightedly to please voters, sweeping problems under the rug until they resurface after the next election. But there is an opposite failure too: the "bridge to nowhere," an infrastructure project justified by its supposed long-term value, where the value never materializes. Examples include an actual bridge to nowhere in Latvia, or Dentacoin, a "blockchain solution for the global dental industry" that at one point had a market capitalization above $1.8 billion.
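To make the low-interest-rate mechanism described a moment ago concrete, here is a minimal sketch (my illustration, not part of the original article; the payoff and rates are arbitrary, and discounted present value is only a rough proxy for how investors weigh far-off stories):

```python
def present_value(payoff: float, annual_rate: float, years: int) -> float:
    """Value today of a single payoff received `years` from now."""
    return payoff / (1 + annual_rate) ** years

# A narrative that promises $100 of value thirty years out:
for rate in (0.08, 0.04, 0.01):
    print(f"discount rate {rate:.0%}: worth ${present_value(100.0, rate, 30):.2f} today")

# discount rate 8%: worth $9.94 today
# discount rate 4%: worth $30.83 today
# discount rate 1%: worth $74.19 today
```

At high rates, only near-term, verifiable profits move the needle; at near-zero rates, a vague story about the distant future is "worth" almost as much as cash today, which is exactly the environment in which narratives outrun reality.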
The core problem in both scenarios is that reasoning about the distant future easily becomes disconnected from reality. In a short-term-oriented environment you may neglect the long run, but at least there is a feedback loop: if a proposal claims near-term benefits, everyone can soon see whether those benefits actually show up. In a long-term-oriented environment, an argument about "long-term benefits" does not have to be correct; it only has to sound correct. So while everyone claims to be playing the game "choose ideas by their long-term value," they are actually playing the game "choose ideas by what wins in a social environment that is often detached from reality and highly adversarial." If a story of vague but enormous long-term benefits can justify anything, then such a story proves nothing.

How can we get the benefits of long-term thinking without drifting away from reality? First, honestly, it is hard. Beyond that, I do believe there are some basic rules of thumb. The simplest: does the thing you are doing in the name of "long-term benefits" have a solid historical track record of actually delivering them? Economic growth does. Preventing species extinction does. Trying to establish a "single world government" does not; in fact, like many similar examples, it has a solid historical track record of repeated failure and massive damage along the way.

If the action you are considering has speculative "long-term benefits" but reliable, known harms, then... don't do it. This rule does not always apply, because sometimes we really do live in unprecedented times. But it is equally important to remember that the sentence "we really do live in unprecedented times" is itself very hard to galaxy-brain-proof.

Using personal aesthetics as a flimsy excuse to ban things

I find sea urchin disgusting. You are eating the gonads of a sea urchin. Sometimes, at an omakase (a no-menu Japanese meal), they are placed right in front of me. Even so, I oppose banning it, as a matter of principle.

One behavior I despise is using the coercive power of the state to impose what is ultimately just a personal aesthetic preference on the private lives of millions of other people. Having aesthetic preferences is fine. Considering aesthetics in the design of public spaces is good. But imposing your aesthetics on other people's private lives is not acceptable: the cost you impose on others far outweighs your own psychological benefit, and if everyone tried to do it, the result would be cultural hegemony or a political war of all against all.

Unsurprisingly, examples of politicians pushing bans on the flimsy grounds of "I find this disgusting" are easy to come by. One rich source is the various anti-gay campaigns. Take Vitaly Mironov, a deputy in the St. Petersburg legislature:

"LGBT people have no rights. In our country, their rights are not included in the list of socially significant protected values. These so-called 'perverts' have all the rights they are entitled to as citizens of our country, but they are not included in some 'extended top-level list.' We will permanently remove them from our country's human rights agenda."

Even Vladimir Putin himself has tried to justify the invasion of Ukraine partly by complaining that the United States has too much "Satanism." A more recent, slightly different example is the US campaign to ban synthetic meat:

"'Cultivated meat' is not meat... it's man-made. Real meat is created by God himself... If you really want to try that 'nitrogen protein paste,' go to California."

Many more people take a more "civilized" approach and try to dress up their aversion in some sort of justification. A common one is "the moral fabric of society," "social stability," and the like; the same arguments are routinely used to justify censorship. What is wrong with that? I'll let Scott Alexander (a well-known blogger who writes in depth about rationality, AI, and related topics) answer:

The "loose principle of harm" says the government may act on complex, indirect harms, the kind that "weaken the moral fabric of society." But allowing the loose principle of harm is tantamount to reviving all the ancient wars over controlling other people that liberalism was supposed to have ended. One person says: "Same-sex marriage will lead to wider acceptance of homosexuality, which will raise rates of sexually transmitted disease! That's harm! We must ban same-sex marriage!"
Another person says: "Allowing people to send their children to private schools may lead to children absorbing anti-gay attitudes in religious schools and committing hate crimes later in life! That's harm! We must ban private schools!" And so on, endlessly.

A "moral fabric of society" certainly exists; it is obvious that some societies are more moral than others in many ways. But it is also vague and undefined, which makes it extremely easy to abuse: almost anything can be labeled a violation of the social moral fabric. The same goes for the more direct appeal to the "wisdom of repugnance," which has been devastating to scientific and medical progress. It also applies to the newer "I just don't like it, so ban it" narratives, a common example being the fight against "global elites" in defense of "local culture." Look at some of the statements from the anti-synthetic-meat fighters (remember, these people are not explaining why they personally won't eat synthetic meat; they are explaining why they want to force that choice on everyone else):

> The "global elite" want to control our behavior and force Americans to accept a diet of petri-dish meat and bugs. Florida is saying no. I am proud to have signed SB 1084, which keeps lab-grown meat out of Florida and puts our farmers and ranchers first, not the agenda of the elite and the World Economic Forum.

> Some people might enjoy eating bugs with Bill Gates, but I don't.

This is a major reason why I am broadly sympathetic to moderate libertarianism (a political philosophy emphasizing individual liberty and limited government). I want to live in a society where banning something requires a clear story about the clear harm or risk it poses to a specific victim, and where, if that story is successfully challenged in court, the law is repealed. That greatly reduces the chance of government being captured by interest groups and used to impose one group's cultural preferences on everyone else's private lives, or to wage a war of all against all.

Defending "inferior finance"

In the crypto world you constantly hear terrible arguments for why you should put money into various high-risk projects. Sometimes they sound "brilliant": the project is "disrupting" (that is, participating in) a trillion-dollar industry, or it is uniquely doing what no one else has done. Other times it is just "the price will go up because celebrities are endorsing it."

I don't object to people having fun, including putting some of their money at risk. What I object to is people being encouraged to put half their net worth into a token that "influencers say will definitely go up," when the most realistic outcome is that the token is worthless two years later. And what I object to even more is the argument that these speculative token games are morally righteous because the poor need this kind of quick 10x to have a fair chance in the modern economy. For example:

"For someone with a net worth of $5,000, 'take it slow and buy index funds' is hellish advice. What they need is upward mobility. They need high-risk, high-reward bets. Memecoins are the only place in the modern economy that offers them that."

This is a terrible argument. One way to refute it is to take the claim that "this is a meaningful or beneficial form of social mobility" apart piece by piece, the way you would with any other argument.
The core problem with the argument is that casinos are zero-sum. Roughly speaking, for every person who climbs the social ladder through this game, someone else slides down it. And once you dig into the math, it gets worse. One of the first things you learn in any standard welfare economics textbook is that a person's utility function for money is concave, meaning diminishing marginal utility: the wealthier you are, the flatter the curve, and the less satisfaction each additional dollar brings.

This model has an important consequence: random coin flips, and especially high-stakes gambles, are on average bad for you. The pain of losing $100,000 outweighs the pleasure of winning $100,000. Suppose you currently have $200,000, and suppose every doubling of your wealth (up 100%, or down 50%) moves you up or down one social class. If you win a $100,000 bet (wealth: $300,000), you rise about half a class; if you lose it (wealth: $100,000), you fall a full class.

Economic models built by scholars who genuinely study human decision-making and try to improve people's lives essentially always reach this conclusion. So what kind of economic model concludes the opposite, that you should go all-in chasing a 10x return? The answer: stories made up by people whose goal is to find a good story for the tokens they are holding.

My purpose here is not to blame people who are genuinely poor and desperate and looking for a way out. My aim is to blame the comfortably wealthy people who use the pretense that "the poor and desperate really need that 10x" to justify laying traps that pull the poor in deeper.

This largely explains why I keep pushing the Ethereum ecosystem to focus on low-risk DeFi (decentralized finance). Letting people in the developing world escape the political collapse of their local currencies and access first-world (stable) interest rates is a remarkable thing; it can genuinely help people rise without pushing anyone else off a cliff.

Recently someone asked me: why say "low-risk DeFi" rather than "good DeFi"? After all, not all high-risk DeFi is bad, and not all low-risk DeFi is good. My answer: if we aim at "good DeFi," anyone can galaxy-brain their way to an argument for why their particular kind of DeFi is "good." But "low-risk" is a binding category; it is very hard to galaxy-brain your way into calling something that visibly bankrupts people overnight "low-risk."

I am certainly not against high-risk DeFi existing; after all, I am a fan of prediction markets (platforms where you bet on the outcomes of future events). But a healthy ecosystem is one where low-risk DeFi is the main course and high-risk DeFi is a side dish: something fun and experimental, not something you put half your life savings into.
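Here is a minimal sketch of the model above (my illustration, not code from the original article), assuming the "one wealth doubling = one social class" rule, which is equivalent to measuring class on a log2-of-wealth scale:

```python
import math

def class_change(wealth_before: float, wealth_after: float) -> float:
    """Social classes gained (+) or lost (-), assuming one doubling = one class."""
    return math.log2(wealth_after / wealth_before)

wealth, stake = 200_000, 100_000

win = class_change(wealth, wealth + stake)    # winning the bet
loss = class_change(wealth, wealth - stake)   # losing the bet

print(f"win : {win:+.2f} classes")                                   # +0.58
print(f"loss: {loss:+.2f} classes")                                  # -1.00
print(f"50/50 bet, expected: {0.5 * win + 0.5 * loss:+.2f} classes")  # -0.21
```

Under this assumption even a perfectly fair coin flip has a negative expected effect on social class, which is the formal version of "the pain of losing $100,000 outweighs the pleasure of winning $100,000."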
One last question: is the claim that "prediction markets are not just gambling; they benefit society by improving access to accurate information" itself just a galaxy-brained, after-the-fact rationalization? Some people certainly think so:

"Prediction markets are nothing more than astrology for college-educated men, who use terms like 'cognitive value' and 'social utility' to cover up the fact that they are just gambling."

Let me defend myself. The reason you can judge this not to be post-hoc rationalization is that the academic tradition of appreciating prediction markets, and of trying to make them real, has existed for thirty years, far longer than any possibility of making serious money from them (whether by building one or by trading on one). That pre-existing intellectual tradition is exactly what memecoins, or fringier examples like personal tokens, lack. That said, I repeat: prediction markets are not low-risk DeFi, so they are a side dish, not something to stake half your net worth on.

Power maximization

Within the AI wing of the Effective Altruism (EA) world, there are many powerful people who, if you ask them, will tell you plainly that their strategy is to accumulate as much power as possible. The goal is to occupy a commanding position so that when the "pivotal moment" arrives, they can step in with overwhelming force and resources and "do the right thing."

Power maximization is the ultimate galaxy-brained strategy. The argument "give me power so I can do X" is equally persuasive no matter what X is. Until that "pivotal moment" (in AI-doom terms, the moment right before we either get utopia or all die and become paperclips), your actions in the name of altruism look exactly like your actions in the name of greedy self-aggrandizement. So anyone actually pursuing the latter can claim, at zero cost, to be pursuing the former, and convince you they are one of the good guys.

From an outside view (a debiasing technique that emphasizes base rates from similar situations rather than your own subjective impressions), the argument is obviously broken: everyone believes they are more moral than others, so even if everyone is convinced that maximizing their own power is a net positive, it cannot actually be true for everyone. But from the inside view, if you look out at the world and see social-media hatred, political corruption, hacks, and other AI companies acting with abandon, the thought "I'm the good one; I should ignore this corrupt outside world and take matters into my own hands" certainly feels appealing. Which is precisely why adopting the outside view is healthy.

Alternatively, you can take a different, more humble inside view. Here is an interesting argument from the Effective Altruism forum:

"Arguably the greatest advantage of investing is its ability to compound the financial resources that can later be used for philanthropy. Since its inception in 1926, the S&P 500 has returned roughly 7% per year after inflation... The risk of 'value drift' is harder to estimate, but it is significant. For example, sources consistently suggest that the average annual rate of value drift among people in the effective altruism community is around 10%."

In other words: your wealth may indeed compound at about 7% per year, but the empirical data also suggests that your commitment to the cause you care about today decays at roughly 10% per year.
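Taking the two quoted figures at face value (7% real returns, and treating value drift as a roughly 10% chance per year of no longer backing today's cause), here is a back-of-the-envelope sketch of "invest now, act later" in expectation; this is my illustration, and the drift model is an assumption, not a calculation from the original:

```python
real_return = 0.07  # annualized, inflation-adjusted return quoted above
drift_rate = 0.10   # assumed yearly chance you no longer back today's cause

# Expected capital still steered by your current values, relative to
# simply acting on those values today:
aligned = 1.0
for year in range(1, 21):
    aligned *= (1 + real_return) * (1 - drift_rate)
    if year in (5, 10, 20):
        print(f"after {year:2d} years: {aligned:.2f}x")

# after  5 years: 0.83x
# after 10 years: 0.69x
# after 20 years: 0.47x
```

In expectation the drift outpaces the compounding, so "accumulate now, do good later" quietly loses ground every year.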
This matches an observation by Tanner Greer: public intellectuals often have a shelf life of about 10 to 15 years, after which their ideas are no better than the background noise around them (as for what it means that I started publishing in 2011, I leave that to the reader to judge). So if you accumulate wealth now in order to "act later," your future self may well spend that extra wealth on things your present self would not even endorse.

"I can do more from the inside"

In AI safety, a recurring problem is a blend of power maximization and inevitabilism: the belief that the best way to advance AI safety is to join the companies racing toward superintelligent AI and improve them from within. Here you often get rationalizations like: "I am very disappointed in OpenAI. They need more safety-conscious employees like me. I am announcing that I am joining to drive change from the inside."

From the inside view, this feels reasonable. From the outside view, you end up looking like this:

Person A: "This place is terrible."
Person B: "Then why haven't you left?"
Person A: "I have to stay and make sure it gets worse."

Another good example of this school of thought is the political system of modern Russia. Let me quote a Financial Times report:

"On February 24, three days after recognizing the Donbas separatists, Putin launched a full-scale invasion of Ukraine, exceeding the technocrats' worst fears. Like the rest of the world, they learned of Putin's true intentions from the television. That he never heeded their warnings was a heavy blow. A former executive who met Gref (the CEO of Sberbank) early in the war said: 'I've never seen him like that. He was completely broken, in utter shock. [...] Everyone thought it was a catastrophe, and he more than anyone.' ... In the narrow circle of Russia's political elite, technocrats like Gref and Nabiullina (the governor of Russia's central bank), once seen as modernizing, reformist counterweights to Putin's siloviki (the hardliners of the security services), shrank back when faced with a historic opportunity to defend their belief in open markets and publicly oppose the war. According to former officials, instead of breaking with Putin, these technocrats have cemented their role as enablers, using their expertise and their tools to blunt the impact of Western sanctions and keep Russia's wartime economy running."

The problem, once again, is that "I can do more from the inside" has astonishingly low galaxy brain resistance. It is always easy to say "I can do more from the inside," no matter what you actually do. So you end up as just another cog in the machine, doing the same job as the cogs next to you who are only there to pay for the mansion and the expensive dinners; the only difference is that your stated reasons sound nicer.

So how do you avoid galaxy-braining yourself? There are many things you can do, but I will focus on two:

Adhere to principles

Draw hard lines around what you will not do (don't kill innocent people, don't steal, don't cheat, respect other people's personal freedom) and set a very high bar for any exception. Philosophers call this deontological ethics (an ethics of duties and rules, rather than outcomes).
Deontology puzzles many people: surely, if there is an underlying reason behind your rules, you should just pursue that reason directly? If "don't steal" is a rule because stealing usually hurts the victim more than it helps you, then shouldn't the real rule be "don't do things that cause more harm than good," and if a particular theft is more beneficial than harmful, steal? The problem with this consequentialist approach (caring only about outcomes, not about rules) is that it has essentially zero galaxy brain resistance. Our brains are very good at inventing reasons why, in this particular situation, the thing you already wanted to do for other reasons happens to be enormously beneficial to all of humanity. Deontology simply says: no, you don't get to do that. One form of deontology is rule utilitarianism: you choose the rules according to what produces the greatest good, but when it comes to individual actions, you just follow the rules you have already chosen.

Hold the right bags

Another recurring theme above is that your behavior is largely determined by your incentives, in crypto slang, by the "bags" you hold (the assets and positions you own, which in turn shape your views and actions). This pressure is hard to resist, and the simplest way to handle it is to avoid giving yourself bad incentives in the first place. A corollary is to avoid holding the wrong social bags: the social circles you belong to. You cannot go entirely without social bags; trying to would cut against basic human instincts. But you can at least diversify them, and the simplest step toward that is to choose your physical location wisely.

Which brings me back to my personal advice on the already over-discussed question of how to contribute to AI safety:

- Don't work for a company that is racing to make "fully autonomous frontier AI" even more capable.
- Don't live in the San Francisco Bay Area.

