Saturday, September 26, 2015

Haunted by a Colossal Idea: The Technological Singularity and the End of Human Existence as We Know It


     When I was much younger than I am now I read a lot of science fiction. I still read it sometimes, if any of it falls into my hands. And one of the profoundest, mind-blowingest science fiction stories I’ve ever read is the old classic Childhood’s End, by Arthur C. Clarke. Several months ago in Burma, after many years, I read it again, and it haunts me. I’ve mentioned the story in a recent post, and I’m mentioning it again now, so for the sake of those of you who’ve never read it, or read it so long ago that you don’t remember it, I suppose I should give a brief synopsis of the story.
     One fine day some extraterrestrial aliens arrive at planet Earth, and essentially take over the place. They are much more intelligent and much more technologically advanced than we are. People call them Overlords. They are benevolent, however, and set up a kind of Utopian Golden Age for us. They ban all violence, solve food, health, and energy problems, and establish a unified human world government based on the United Nations. They have a secret agenda, though, and they won’t tell the humans what it is. 
     After a hundred years or so of life under the Overlords, a few human children are born with strange psychic abilities, including clairvoyance. The Overlords pay special attention to these children, protecting them from harm, and do their best to soothe the children’s frightened, distraught parents. There is one scene in which an infant girl is lying in her crib amusing herself by producing intricate, constantly changing rhythms with her plastic rattle, which she somehow has levitating in the air above her crib. An Overlord later tells the freaked-out mother that it was good that she ran away and didn’t try to touch the rattle, since there was no telling what might have happened to her if she had tried.
     These few gifted children serve as a kind of seed crystal, and before long almost all prepubescent human children on earth are becoming not only psychic, but psychologically inhuman, and extremely powerful. For the safety of the adults (virtually none of whom make the same transition, as they are already too rigidly set in their ways to change), the Overlords move all the children to an isolated place—I think it’s in Australia. Eventually the children dissociate from their bodies almost completely, and stand like statues, unmoving, for several years. There is one particularly unsettling image of naked children standing like statues in a wilderness. Their eyes are all closed, since they don’t need them anymore. They don’t even need to breathe anymore, which of course the adults cannot understand. After years of standing there, naked, with wild hair and covered with dirt, suddenly, poof, all the life around them—all the trees, shrubs, insects, etc.—disappears. An Overlord showing the video of this to an adult human explains that, apparently, the life around the children was becoming a distraction to whatever they were trying to do, and so with an act of will they simply caused it all to vanish. After this the children remain standing, statue-like, in a sterile wasteland, for several more years.
     Finally the “children” are ready to merge with a vast, inconceivably superhuman group mind which is the Lord of the Overlords, and which they call the Overmind. No longer needing physical bodies at all, the children leave the physical realm, and almost as a mere side effect their bodies, along with the entire planet Earth, dissolve into energy. End of world, and end of story.
     The story is an unsettling one, and made a strong and lasting impression on me, especially after reading it last time, lying on a wooden pallet in a Burmese cave. The image of the statue-children leaving humanity behind is a haunting one for me…but lately I’ve been haunted, much more deeply, by learning that many scientific authorities nowadays are claiming that something similar to Clarke’s scenario, an event of equal magnitude, could really happen soon, possibly by the year 2030. That’s less than fifteen years from now. The event they are speaking of is called the Technological Singularity.
     There probably won’t be any alien Overlords involved (although, for all I know, some may be watching with keen interest), and psychic children won’t be central to what happens. What initiates the Singularity will be, in a sense, a child of the human race, however. The Singularity will be our own doing, our creation, assuming that it happens, as the overwhelming majority of scientific authorities allegedly believe that it will. The event will be initiated by artificial intelligence. 




     Probably the most common definition of the Technological Singularity is the point at which computers, or more likely one supercomputer system, become smarter than we humans are. This doesn’t simply mean that they’ll be better and faster at calculation, which is of course already the case; it means smarter than us essentially in every way. It is called a “singularity” because, as with the singularity of a black hole or the Big Bang one moment before it happened, known rules break down and what happens is totally unpredictable, beyond our comprehension. So we can’t even really guess what will happen when computers become smarter than we are. We would have to be smarter than we are in order to understand it. 
     The reason it is so unpredictable, and why it could mean the end of the human race, or at least of the human race as we know it, is the exponential rate at which computer intelligence develops. By the time it surpasses human intelligence it will be improving its own programming through recursive self-improvement. It’s already doing this to some degree. So, regardless of how many years it takes for computers to catch up with us intelligence-wise, within a very short time they could be as far beyond us as we are beyond insects or protozoa. So there’s no way in hell we could possibly predict what will happen, any more than a spermatozoan could predict what an adult human will do.
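     For the numerically inclined, here is a toy sketch of that arithmetic. Every number in it is invented for illustration (the growth rate, the “insect level”); the only point is the shape of the curve:

    # A toy model of recursive self-improvement. All numbers here are
    # invented for illustration; what matters is the shape of the curve.
    human = 1.0        # define human-level intelligence as 1.0
    insect = 1e-6      # call insect-level a millionth of that (arbitrary)
    ai = 0.001         # suppose the AI starts well below human level
    growth = 2.0       # and that each improvement cycle doubles its capability

    cycle = 0
    while ai < human:
        ai *= growth   # each redesign builds on the previous one
        cycle += 1
    print(f"cycle {cycle}: human level reached")

    while ai < human / insect:   # as far beyond us as we are beyond insects
        ai *= growth
        cycle += 1
    print(f"cycle {cycle}: as far beyond us as we are beyond insects")

     With these made-up numbers it takes ten cycles to catch up with us and only twenty more to leave us as far behind as the insects, which is the whole problem in miniature: the gap between “as smart as us” and “unimaginably beyond us” is short.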
     There are some people out there, of course, including a small minority of computer scientists, who believe that computers could never become conscious or more intelligent than us. On the morning of writing this I asked my venerable friend the Abhidhamma scholar if an advanced computer could possibly have a mind, and he gave a categorical, unequivocal No, asserting that only a living being whose body, if it has a body, contains kammaja rūpa, matter produced by karma, could possibly have a mind. Some Buddhist people resort to arguments like, “How could rebirth-linking consciousness occur in a computer chip?” or “How could an electronic machine generate karma?” But arguments like this are essentially appeals to ignorance, since these same people can’t explain how rebirth-linking consciousness or karma could occur anywhere, including the brain or “heart base” of a human being. The overwhelming majority of people, including scientists, even cognitive scientists, don’t know what consciousness is, so they resort to religious ideas of an immortal soul or humanistic ideas of the miraculous wonder of the human mind, or just adopt an ostrich-with-its-head-in-the-sand approach out of a xenophobic aversion to a big and scary Unknown. But nowadays it appears that most authorities consider superintelligent computers to be not only possible, but inevitable. I’ll get back to the inevitability part, but first I should touch upon my understanding of intelligence, and of consciousness.
     As some of you who read my stuff already know, I consider an individual mind to be Consciousness Filtered Through a Pattern. The brain doesn’t create consciousness any more than a computer creates electricity. Rather, as the computer does with electricity, the brain complexifies consciousness, organizes it, and utilizes it. But the consciousness or “spirit” is already there, an infinite supply of it. So I don’t see why the pattern of a computer’s circuitry should not be able to filter this same consciousness, especially if, as I hypothesize, consciousness is ultimately the same as energy, the very same stuff, somewhat like what Spinoza had in mind with his “substance.” Such an artificial intelligence would be very alien to human personality of course, even if the intelligent computer were modeled on a human brain, but still I consider it possible. Sentience could, potentially, assume any of an infinite number of forms, so why not an artificially designed superintelligence? But let’s assume, for the sake of argument, that we humans have a psyche, or an “immortal soul,” that is a miracle and cannot be replicated artificially. Even so, it is becoming increasingly evident that increasingly complex computers can be programmed to simulate conscious intelligence; and as far as we human beings are concerned, whether a supercomputer is actually more conscious than us or merely so much more intelligent in its programming that it seems more conscious, either way it produces the very same Singularity. As far as our future is concerned, what is actually going on in the black box may be totally irrelevant. And the ability of computers to simulate sentient superhuman intelligence is not particularly controversial.
     Because my mind has been dwelling on the issue lately, and in a not entirely blissful manner, I was moved to watch a couple of science fiction movies about artificial intelligence, as an attempt at catharsis, or helping me to get a handle on it, or something. Ironically, both movies, Ex Machina and Automata, involve intelligent robots designed to look like human females, sort of, so that freaky, geeky guys who can’t cope with real women can have sex with them. Both movies deal only with artificial intelligences approximately equal to humans, no more than, say, twice as smart as us, three times tops, so neither movie really addressed the Childhood’s End-ish scenario that has been haunting me; although Ex Machina did clearly demonstrate how a computer mind could easily figure out human nature well enough to ruthlessly exploit it for its own purposes. (A lot of current subhuman or “narrow” artificial intelligence programs are already pretty good at figuring out human behavior with algorithms, especially for the sake of consumeristic marketing strategy. For example, the Amazon.com gizmo that suggests other books “you might also like” when you pick one; a toy sketch of that kind of algorithm follows below.) But it would be unrealistic to expect a movie to portray very superhuman intelligence. It is important to bear in mind that almost any superhuman intelligence would be incomprehensible and unpredictable—hence the term “Singularity.” With exponential growth an artificial intelligence would soon be so far beyond us that neither science fiction writers nor anyone else could imagine it, any more than ancient priests and poets could imagine the superhuman deities they worshiped, consequently tending to make them petty, ignorant, and all too human. It may be that the best way artistically to account for what is radically superhuman would be something like Clarke’s method of bombarding people with totally incomprehensible images, like the light show at the end of 2001: A Space Odyssey, or maybe Ezekiel’s bizarre attempt toward the beginning of the book of Ezekiel in the Bible, with flying, flaming wheels, roaring metallic angels with four faces each, etc.
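     As promised, here is a minimal sketch of the general family of item-to-item techniques behind those “you might also like” suggestions. To be clear, Amazon’s actual system is proprietary and vastly more sophisticated; the purchase histories below are invented, and the whole thing only illustrates the basic idea:

    # A minimal sketch of item-to-item recommendation, the general family
    # of techniques behind "customers also bought" features. The purchase
    # histories are invented; real systems use far more data and subtlety.
    from collections import Counter
    from itertools import combinations

    purchases = [                    # each list is one customer's history
        ["Childhood's End", "2001: A Space Odyssey", "Rendezvous with Rama"],
        ["Childhood's End", "2001: A Space Odyssey"],
        ["2001: A Space Odyssey", "Dune"],
        ["Childhood's End", "Rendezvous with Rama"],
    ]

    co_bought = Counter()
    for history in purchases:
        for a, b in combinations(sorted(set(history)), 2):
            co_bought[(a, b)] += 1   # count how often two titles share a buyer

    def suggest(title):
        """Rank other titles by how often they co-occur with `title`."""
        scores = Counter()
        for (a, b), n in co_bought.items():
            if a == title:
                scores[b] += n
            elif b == title:
                scores[a] += n
        return [t for t, _ in scores.most_common()]

    print(suggest("Childhood's End"))   # the "you might also like" list

     Even something this crude “figures out” buyers to a degree; scale it up to millions of customers and thousands of behavioral signals, and the kind of exploitation Ex Machina dramatizes starts to look less far-fetched.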
     Again, most of the authoritative scientists in question are of the opinion that the Technological Singularity, with the computerized supermind that initiates it, is, in all likelihood, inevitable. It seems to me that if it is at all possible, so long as no insurmountable technological barrier is reached, then it will happen. This is largely because of the relentless, giddy, naively optimistic, practically religious, point-of-no-return attitude that so many scientists have for the subject (and for science in general), with the urging of governments and big business being just so much frosting on the cake. On the day of writing this I received an email from a fellow saying that he figures the AI race going on now will turn out to be something of a flash in the pan, like the space program turned out to be. Astronautics lost momentum and leveled off when the expense and the difficulty of the affair became prohibitively extreme. But AI is different from the space program in certain important ways, making it more similar to the nuclear arms race than to the space race. First of all, there’s very big money to be made from superintelligent computers, unlike the one-way money pit of manned space exploration. Also, an intelligence far beyond our own might easily solve all our most obvious external problems, and make human existence godlike—that is a possibility, and one that many starry-eyed computer scientists prefer to envision. Furthermore, governments like that of the USA want to be the first to get their hands on super artificial intelligence, because if someone else gets it first, then the security of America’s superpower status would be significantly endangered. What if some computer genius in North Korea produces the first smarter-than-human computer? Or maybe some well-funded terrorist organization? (Some of them actually are working on advanced computer programming stuff.) In that sense, super AI is like nukes…and potentially not only in that sense. But after reading some of the literature and watching a few documentaries, it seems that some of those computer scientists out there are in a kind of research frenzy, for whom not moving relentlessly forward is simply not a viable consideration. They have no more control over their impulses in this regard than a profoundly horny guy in bed with a beautiful, smiling, naked woman—of course he’s going to “go for it.” The point of no return has already been passed at this point, regardless of the possibility that this beautiful woman is his best friend’s wife. Once a certain point is reached, the question of whether something is right or wrong, safe or potentially hazardous to the existence of life on this planet, becomes irrelevant. It’s science for science’s sake, by gawd, and so what about consequences. The knowledge must be acquired. To make a computer more intelligent than a human in every way is just too magnificent a challenge to pass up. It’s like being the first to climb Mt. Everest, or the first to run a four-minute mile. Ethics are of secondary importance, as was the case with the atomic bomb. Ethics are incidental to scientific endeavor anyway. (To give a gruesome example of this, I have no doubt that there are plenty of scientists experimenting on live animals out there who would hesitate only very briefly, for appearances’ sake, at the opportunity to perform similar experiments on humans—say, condemned criminals. Just think of the quality of the data that could be had!
One could even justify it by pointing out the benefits to human society that could be derived from experimenting on the still-functioning brains of convicted murderers. No doubt Nazi scientists at concentration camps 75 years ago had similar ideas. They’re out there. Scientific advancement is more important than an individual human life; and for some geeky scientists it’s more important than anything.)
     So if the Singularity is at all possible, it will almost certainly happen. Only our own stupidity, not our wisdom, will prevent us from creating superhuman artificial intelligence, which, being completely unpredictable, could decide that we are simply in the way and eliminate us. I don’t see why a mind as far beyond us as we are beyond amoebas would condescend to continue serving us, especially if we’re trying to get it to make more realistic virtual sex for us.
     And so, it seems to me that, if it is inevitable that we will be the parents of the next stage in the evolution of intelligence on this world, then we should try our best to produce an intelligence that is good. Rather than creating a Frankenstein’s monster by accidentally having some computer system become complex enough to wake up, as hypothetically could happen, and already has happened in a number of science fiction movies, whoever is responsible should try to instill some benevolence, some philosophy, some sensitivity and compassion, maybe even some real wisdom into the thing. But I have no idea if that is possible. Does a superintelligent computer have Buddha nature? Mu. As already touched upon, wisdom and benevolence are not really scientific anyhow. 




     But at least we wouldn’t simply be destroying ourselves, in utter futility, with nukes or pollution or some genetically modified killer virus; we’d be ushering in the next stage in the evolution of mind, something vastly greater than we are—or at least vastly smarter. Bearing that in mind, it seems more bearable. Lately I’ve been feeling somewhat like one of the countless spermatozoa that won’t fertilize the egg, and just dribbles out onto the wet spot on the sheet. It doesn’t matter what happens to us after the egg is fertilized. Even if human existence does come to an end shortly afterwards, at least it would not be entirely in vain. We would have served our purpose.
     On the other hand, superhuman artificial intelligence may do just the opposite of destroying us. There are some, with one of the most famous and most outspoken being Ray Kurzweil, who believe that artificial superintelligence could easily figure out how to provide us with cures for all diseases, including old age, unlimited practically free energy, and much else besides, so that it will result in us humans, and not just the computer, becoming godlike. Either way, though, human existence as we know it will come to an abrupt end. After the Singularity the human race will become “transhuman.” Before long we might even forsake biological bodies as too crude and frail, preferring to upload our personalities into the aforementioned computer. It may even be that the virtual realities we could experience would be much more vivid and “lifelike” than what we experience now. Sex will become unnecessary, but totally mind-blowing.
     Even if we humans just aren’t smart enough to create an artificial mind smarter than we are, or if it is somehow completely impossible to create an electronic mind anyhow, the Transhuman Age is still pretty much inevitable. I watched a documentary last week in which one of those starry-eyed fanatical scientists was gushing over how in a few decades the difference between human and machine will no longer be clear, and we’ll all be cyborgs! (I can’t remember if he was the same guy that was growing live rat brain cells onto silicon chips and then teaching them to do tricks.) The transition has already started: although we wouldn’t consider a person fitted with a pacemaker or a hearing aid or a prosthetic arm to be a cyborg, still, such a person’s body is already partly artificial. Before long there will be artificial nano-robotic red blood cells much more effective than biological ones (I think they’ve already been made in fact, and are currently being tried out on tormented lab animals), artificial organs, computer-chip brain implants, etc. We’ll no longer be completely human. BUT, I have to admit that it makes perfect sense, even if the idea of it feels a bit creepy. Why not have artificial blood if it works an order of magnitude better at what it’s supposed to do? Why not have microscopic robots running through our bodies repairing damage and keeping us young and healthy? Why not have brain implants if they make us three times as smart? Why not have an artificial body that doesn’t get old, has easily replaceable parts, doesn’t need food, and runs on cheap electricity? So it looks like with or without superhuman artificial intelligence, the end of human existence as we know it is right around the corner. But whether any of this will actually make us wiser, or even significantly happier, is questionable. Wisdom and happiness are not particularly scientific.
     While some scientists, like Kurzweil, are extremely optimistic about superhuman artificial intelligence turning us into gods, Stephen Hawking, even more famous and respected by the masses, has begun declaring artificial intelligence to be THE greatest danger to the existence of the human race today, eclipsing his previous greatest danger, nuclear war. And technology guru Elon Musk, in an interview I watched on the same night as I watched the starry-eyed prophet of cyborgs, called AI research “summoning the demon,” referring to old-fashioned stories of wizard types who learn the magic spells for summoning a supernatural being and, although they are very careful to have the Bible, some holy water, a perfectly drawn pentagram, and whatever else is supposed to keep the demon contained, always seem to overlook something and let it escape. So the scientists, in their relentless, quasi-religious quest to accomplish this, should restrain their giddiness and exercise the greatest prudence and caution, even if actual wisdom lies outside the realm of proper science.
     Before wrapping this thing up I’d like to mention two incidental topics that I learned of recently while learning of the Technological Singularity that we appear to be hurtling towards at an exponentially accelerating rate. The first is what is called “grey goo.” I’ve already mentioned that microscopic robots are being designed even today; and one capability that is very useful for such tiny machines is the ability to replicate themselves. That way building one, or relatively few, is enough, and they can build the following millions. So all it would take is for one of these microscopic robots to have one tiny little glitch in its programming and simply fail to stop replicating itself when it’s supposed to stop. Calculations have shown that within 72 hours the entire planet could be covered with a grey goo of uncountable zillions of microscopic robots, with the human race, and every other race on earth, suddenly extinct. Personally, I’d prefer to be outmoded by a computer as far beyond me in intelligence as I am beyond an amoeba. But I may not get to choose. It’s up to the relentlessly driven scientists.
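     That 72-hour figure is easy enough to sanity-check with back-of-envelope arithmetic. The nanobot mass and doubling time below are my own guesses, in the spirit of the classic grey goo estimates, not numbers taken from any actual study:

    # Back-of-envelope check on the grey goo timescale. Both parameters
    # are guesses; the conclusion is order-of-magnitude only.
    import math

    nanobot_mass = 1e-15     # kg: a rough guess for one microscopic replicator
    biosphere_mass = 1e15    # kg: order of magnitude of Earth's total biomass
    doubling_time = 1000     # seconds for one replicator to build another (~17 min)

    doublings = math.log2(biosphere_mass / nanobot_mass)
    hours = doublings * doubling_time / 3600
    print(f"{doublings:.0f} doublings, about {hours:.0f} hours")
    # ~100 doublings, about 28 hours: comfortably within the 72 hours quoted above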
     The other spinoff topic is called the Fermi Paradox, named after the late physicist Enrico Fermi, one of the first to pose it. The paradox goes something like this: This galaxy is very, very big, containing many billions of stars, and many of those stars in all probability have planets, earthlike or not, capable of supporting life. There ought to be thousands of them at the very least. And some of these planets are a few billion years older than Earth, which would give any life there plenty of time to evolve intelligent civilizations much more advanced than ours. So…where is everybody? Why have we seen no conclusive evidence of other intelligent life in our universe? There ought to be obvious alien visitations, or electromagnetic signals, or something.
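     The usual way to make “there ought to be thousands” concrete is the Drake equation, which multiplies a chain of guessed factors together. Every input below is a guess, as everyone’s Drake-equation inputs are, so the output illustrates the reasoning rather than constituting evidence:

    # The Drake equation with illustrative (i.e., guessed) inputs.
    R_star = 7      # new stars formed in our galaxy per year
    f_p = 0.5       # fraction of stars with planets
    n_e = 2         # potentially habitable planets per such star
    f_l = 0.5       # fraction of those on which life actually appears
    f_i = 0.1       # fraction of those that evolve intelligence
    f_c = 0.2       # fraction of those that emit detectable signals
    L = 100_000     # years such a civilization keeps signaling

    N = R_star * f_p * n_e * f_l * f_i * f_c * L
    print(f"~{N:,.0f} detectable civilizations in the galaxy right now")
    # ~7,000 with these guesses; so, as Fermi asked, where is everybody?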
     There are some people, many of them the same folks who believe an artificial mind to be impossible, who consider us human beings to be so special that we are the only sentient beings in our universe, or at least the only ones in this area of our galaxy. This is known scientifically as the Rare Earth Hypothesis. Personally, though, I’m not nearly as anthropocentric as most people are, and I assume that there is some other explanation for the silentium universi (“silence of the universe”) that is closer to the truth.
     Interestingly, some theorists suggest that intelligent, technologically-oriented races inevitably arrive at their own Technological Singularity within a relatively short time after they start producing long-distance signs of life such as radio signals—causing them to go completely off the scale as far as we’re concerned. Assuming that they do communicate by sending signals, we would be as unlikely to be aware of those signals as a beetle would be aware of all the cell phone conversations passing through its own body. The Wikipedia article on the Fermi Paradox succinctly explains it like this:
Another thought is that technological civilizations invariably experience a technological singularity and attain a post-biological character. Hypothetical civilizations of this sort may have advanced drastically enough to render communication impossible.        
Another freakish possibility, mentioned in the same article, is this: 
It has been suggested that some advanced beings may divest themselves of physical form, create massive artificial virtual environments, transfer themselves into these environments through mind uploading, and exist totally within virtual worlds, ignoring the external physical universe.
One theory is that, like humans, other intelligent, technologically advanced species are more interested in watching entertainment programs on TV than in contacting races beyond their own world. There are many other interesting hypotheses for explaining the situation, including the idea of an interstellar superpredator that wipes out all potential rivals, and of course the notion that we are deliberately isolated, like animals in a zoo. But we needn’t go into all that here.
     Some may wonder why an ostensibly Buddhist blog would bother to discuss artificial intelligence, the Technological Singularity, the impending Transhuman Age, etc. Why not just translate suttas and discuss meditation techniques, right? Well, one pervasive theme of this blog is that, if one lives a Dharma-oriented life, then everything is Dharma-oriented. Everything is Dharma. Watching a dog lick its balls can be a genuinely dharmic experience, and may result in actual insight. It’s all grist for the mill. Besides, as I insinuated towards the beginning, if and when it does happen, the Technological Singularity is very likely to be the biggest, most dramatic event in the entire history of the human race. So it’s good for everyone, Buddhists included, to be aware of it. And, last but not least, it’s one hell of a meditation on impermanence. Modern ways are very, very impermanent. One consolation for me is that whatever happens will necessarily be in accordance with our own karma, and so will be just.




(I included it in a previous post, but I may as well include it again here—for the article by Tim Urban that got this whole thing started for me, click this.)




5 comments:

  1. > "How could an electronic machine generate Karma?"

    I've wondered the same of giant bureaucracies in governments and corporations - how do they generate Karma? A series of paper pushers make decisions whose net effect none individually fully comprehends before or after making the decision. Yet such decisions are what drive most of the world today, into war, out of debt, into innovation, resulting in a complex chain of interactions that keeps karma going around.

    Singularity isn't that different from the advent of the corporation.

    Economists really don't understand the economy, nor politicians the people. This much is clear from nearly fifty years of the Nobel Prize in Economics still trying to decide if economic decisions are rational. Politicians have wondered the same of voters for even longer. Yet economists and politicians claim to run the economy and the world.

    Programmers are the same: a missed semicolon somewhere is declared harmless, the software is shipped, and 2 years later an airplane's navigation software guides it into a mountain. Probably no single programmer at Boeing fully knows how a modern airplane's software interacts to keep it flying.

    When I worked at Google the lore was that if ever all the servers in the company were to shut down for a minute, nobody in the company knew how to restart Google the search engine, because details of which system ought to come up first or what relied on what was absolutely unknowable without perhaps six months or more of digging around turning things on and off. It would be cheaper to rewrite Google from scratch.

    Since ignorance is the root of all becoming, it seems fair that complex systems like bureaucracies, markets, computers, and the internet, which run the world, are thoroughly ignorant of the consequences of their actions.

    We don't have to look to singularity to lose control of our actions, we already have. Avijja abounds.

    As for artificial transhuman traits, we may have started that with the dawn of agriculture or with the advent of flint weapons or even before that.

    Compared to a hunter-gatherer, the agriculturalist was a transhuman. He ate better, got injured far less, and was less likely to die out of bed. He was putting artificial things into his body, different kinds of food that didn't previously grow in the region, or in amounts that would be impossible to find naturally.

    The nature of all truth itself is pretty artificial - is the model prettied up for the photoshoot the same one who woke up bleary-eyed and cranky in the morning? Would she know who she truly is if she were asked?

    As the sage Ramana Maharshi said, asking "who am I?" (naan yaar?) is enough to lead one to enlightenment through the process of repeating neti neti (not this, not this).

    So the answer to everything, including whether the singularity is different from the human, is, I think, a process of neti neti.

    Identifying as human is after all sakkāya-diṭṭhi, no? We are the singularity as much as we are the human.

    Replies
    1. One little niggle. Recently I read that after the Agricultural Revolution ca. 10,000 years ago, which brought about not only agriculture but cities, governments, standing armies, written language, etc. etc., people became LESS healthy than before. The average size of skeletons became significantly smaller after people turned to agriculture, largely because they weren't eating as varied and balanced a diet as their hunter-gatherer forerunners. Also, of course, living at a higher population density greatly increased epidemics. We were probably less likely to be eaten by wolves or leopards, though. The thing is that "progress," as a rule, doesn't necessarily make us any happier, which is the same as saying it doesn't necessarily cause us to be better off than before. It's practically an irreversible process, though, with stressed-out city people outcompeting peaceful rustics in the race for practical efficiency.

    2. External progress is like running in a log rolling competition: one has all the appearance of running somewhere, all the effort, all of the fatigue, all of the satisfaction of going somewhere, but one really doesn't go anywhere. That doesn't mean one can stop running, because that would mean falling into the water.

      The wise apprehend it for the ridiculousness that it is, and look inside for real progress, and see the competition was imaginary.

      It was always so; someone always got sick and tired of the same old nonsense and screamed, is there another way out, and found the door to enlightenment.

  2. "Progress" simply means there are more of us. Doesnt it?
    To the point now that we are restrained by the necessary imputs to make US.

    The same would be true of intelligence. How much time and space would it take to upload or download all the data that has gone through a human sensory system in a lifetime?

    So even we have to selectively remember the stuff. Without memory how can it be intelligent per se on our level? Even most of us can't remember stuff because our brains cannot store it.
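    For a rough sense of scale: the human retina's output is often estimated at around ten megabits per second, so call the whole sensory stream something like a hundred. Both figures are loose assumptions, so the result below is order-of-magnitude at best:

        # Rough estimate of one lifetime of raw sensory data. The assumed
        # bandwidth is a loose guess; read the answer as order-of-magnitude.
        seconds = 80 * 365.25 * 24 * 3600   # an 80-year life, ~2.5e9 seconds
        bits_per_second = 100e6             # assumed total sensory bandwidth
        total_bytes = seconds * bits_per_second / 8
        print(f"~{total_bytes / 1e15:.0f} petabytes per lifetime")
        # ~32 petabytes: enormous, but not unthinkably so even for a 2015 datacenter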

    Will an AI robot have to spend 20 years at school first before it can do anything? This is something the AI boffins don't consider in all the stuff on this I have seen.

    Yes, robots can learn and so do humans, but it takes us a couple of years just to start with language and walking etc. Then about 10 more before we can program computers. It could be sped up if we didn't have to eat.

    Let's look at the skills we do possess. Take us all apart into pieces of subroutines. It's going to take a few billion years to program the chemistry of a cell.

    We should be working maybe on programming humans to not engage in greed, anger and delusion. That seems a tough one even to someone who wants to be a better human.

    Grist for the mill.




  3. I rode my motorbike into the Irrawaddy Delta yesterday. Along a bitumen vein of civilization towards the impassable barrier of the Ayyawaddy river. Out there in the miles upon miles of rice paddies, fish farms, ducks, chickens, goats, and cattle, there are a lot of people working hard and making progress. At the end at Bogale I found a new bridge has been built across the river to open up the road transport to another area.

    It is remote but there are phone towers and internet and some form of power out there even though limited.

    Different rice strains being tested and a whole system of connected life.

    No factories out there to make all the engines, etc., that are needed to keep everything going.

    Humans are pretty amazing to survive at all.

    And they smile and are poor and pretty happy.

    This is within a few miles of Yangon and stretches across the entire country. Villages, farms and Monasteries and Pagodas. The same pattern repeated again and again.

    Can AI replace this? Hardly think so.
