Which "utopia" would you like to live in?

Saelune

Trump put kids in cages!
Legacy
Mar 8, 2011
8,411
16
23
TakerFoxx said:
Addendum_Forthcoming said:
Yeah, I'm gonna be honest with you, that actually sounds horrifying. Like, that's the sort of thing I'd fight, die, and even kill to prevent. Which goes to show why utopias are an inherently flawed concept: one person's paradise is another person's living nightmare.
Well, the issue with utopias is that utopias are supposed to be perfect, but if a single person is genuinely unhappy, it's not a perfect world.
 

Addendum_Forthcoming

Queen of the Edit
Feb 4, 2009
3,647
0
0
TakerFoxx said:
Addendum_Forthcoming said:
Yeah, I'm gonna be honest with you, that actually sounds horrifying. Like, that's the sort of thing I'd fight, die, and even kill to prevent. Which goes to show why utopias are an inherently flawed concept: one person's paradise is another person's living nightmare.
That was kind of the point of my post. My utopia is going to be a nightmare for some. Well... nightmare for those outside the hivemind. Ego death for everybody else. The hivemind is peaceful... craves new stimuli. Shares it at the speed of a thought.

No more loneliness, fear, pain or madness. Just the burning desire to consume all other thought across time and space for eternity. We could even cultivate humanity on distant worlds... let them go unhindered with zero contact or knowledge of their creators... then swarm and gorge. Like a buffet every few millennia.

Hell ... maybe your impulse to fight it and stay an individual could be architecturally designed through cultural programming from an 'invisible hand', so as to make the harvest more desirable and meaningful in the end. Though arguably the sudden influx of new minds and information would be more delicious if we didn't have any knowledge of its development.

Eventually the hivemind will be *so good* at human cultivation that we might be able to establish multiple 'colonies' everywhere and simply drift across the galaxy consuming, and then repopulating, each planet in turn. Designing multiple planetary habitats in all the spiral arms in due course, partly to shape how cultures will emerge in the 10,000-year gaps...

Thousands of years of differently designed tech, culture and art ... consumed every few decades as the hivemind roams pre-cultivated worlds in a never-ending cycle of consumption. We are helping them as much as they are helping us. We are bringing them into the collective. We make them as if gods ... whether they want it or not.

Hell ... my utopia even has physics going for it. Let's say we achieve .99 LS and populate worlds in our spiral arm of the galaxy. At that speed, time aboard runs about seven times slower than it does for the worlds we seed (γ ≈ 7.1), so a 100-light-year hop takes roughly a century of planet time but only about fourteen years of ours. Seed a world, cruise a long circuit, and come back: millennia will have passed there while we've aged only centuries. Imagine consuming all Earthling minds carrying thousands of years of accumulated knowledge, reset to a base standard of copper tools after each harvest?

Now that's what I call a hamburger!
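
For anyone who wants to sanity-check those numbers, here's a minimal sketch of the standard time-dilation arithmetic (the 0.99c speed and the 100-light-year hop are taken from the post; everything else is textbook special relativity):

```python
import math

v = 0.99             # cruise speed as a fraction of c (from the post)
distance_ly = 100.0  # length of one hop in light years (from the post)

gamma = 1.0 / math.sqrt(1.0 - v**2)  # Lorentz factor, ~7.09 at 0.99c
planet_years = distance_ly / v       # elapsed time in the planet frame, ~101 years
ship_years = planet_years / gamma    # proper time aboard the hive, ~14 years

print(f"gamma        = {gamma:.2f}")
print(f"planet frame = {planet_years:.0f} years per hop")
print(f"hive frame   = {ship_years:.1f} years per hop")
```

So a single 100-light-year hop costs the hive about fourteen years while a century passes outside; the multi-millennia gaps come from leaving a world alone for several circuits.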

If we terraform multiple star systems all within 100 light years of each other ... we can expand our menu considerably. Consume billions from a mostly arid world and the cultures that spring from it. A verdant garden-world paradise and the cultures on it. So on and so forth. As the hivemind grows ever larger and smarter, we can start human colonies with more advanced tech and see how much further they get culturally, technologically, socially ... and ever more ripe for the picking.
 

MeatMachine

Dr. Stan Gray
May 31, 2011
597
0
0
I could see myself living a content life in a technocratic, libertarian, or fascist utopia. I would never voluntarily make the sacrifices necessary to see any of them come true, but those are the societies I would prefer to live in if things became irreversibly boned.

If pressed to settle on only one option, it would certainly be the technocratic utopia. Either I would excel as one of the scientific elite, or I would simply be brainwashed/programmed into being satisfied with a lower position. An ethically deplorable state to be in, but nonetheless, the only utopia in which everyone is at least superficially satisfied.
 

skywolfblue

New member
Jul 17, 2011
1,514
0
0
Technocratic Aristocracy and Land of Liberty are pretty horrifying. I love science and technology, but scientists are just as prone to evil as politicians are.

To give the elite "scientists" free rein without any kind of checks or balances is... Well, it's what we're allowing politicians to do right now, and it's horrible.

------------

Where do people get the idea that a Religious Utopia would have:
- Lack of technology
- Culture/Art being homogeneous
- People being un-aspirational

The foundations of western science were built by people who believed in God. Our technology comes from that. To be Christian isn't to be anti-technology and anti-science, it's quite the opposite.

Culture and Art THRIVED under the Catholic Church.

Un-aspirational? Even the blandest parts of history in the Catholic Church are filled with people who stood out. People with big hopes and bright dreams.

So yeah, I'd take a Christian Utopia.
Even with the bad apples (some judgmental people).

------------

Addendum_Forthcoming said:
What about the posthuman ideal? All minds, all thoughts, all consciousness subsumed into an ever-growing, self-perpetuating hivemind with total ego death? If utopianism is merely about the prevention of pain, why not strive to create a posthumanity where all pain is shared, and thus given total impetus to reduce or remove it entirely?
With the death of individuality, comes the death of love. I would fight that future tooth and nail.
 

FalloutJack

Bah weep grah nah neep ninny bom
Nov 20, 2008
15,489
0
0
I'd say Technocracy with a twist. The twist is that when my super-advanced genius comes to power, I DO prevent the deplorable acts, since I would be doing this specifically to take away all reasons for strife. You can't have a war when nobody has anything to complain about, especially when I'm the manager of the machinery.
 

Addendum_Forthcoming

Queen of the Edit
Feb 4, 2009
3,647
0
0
skywolfblue said:
With the death of individuality, comes the death of love. I would fight that future tooth and nail.
You don't have to join the hivemind. Surely the whole point is that choosing not to become part of a single entity is perhaps the last free choice you have to make. Though arguably, given the complete dominance of such an establishment culturally, intellectually, socially, militarily ... the power of the hive would likely leave all those outside it facing hardship. After all, zero crime, zero waste, perfected education, no death, no illness, no political strife, no corruption ... basically it would be akin to the greatest political system known to humanity and would likely come to dominate all systems of trade, diplomacy or warfare outside it.

No other institution would share information as quickly, as freely, as efficiently as the hive. Plus it cures humanity of all its fears, its loneliness, its doubts, its aggression ... it's the path we've been moving towards since the birth of the information age, so the idea that people would fight it seems odd to me.

It would be like expecting a second Luddite rebellion as we further develop BCI. It's not as though it's going to be an instant change, either. Rather, it would be incremental. We continue to develop neuroprosthetics to the point where the hive becomes simply the most cost-effective and 'happiest' existence on offer. And then, with continual exposure, being without such close intimacy ... facing the harshness of an isolated existence outside the hive ... would be akin to the physiological effects that occur when a long-term partner dies. So much so that leaving the system might actually cause death by 'heartache' (which is a real thing, believe it or not ...)

Personally, I look to the benefits. It's an existence without wants and with relatively few needs; it grants you access to all available information, and immortality. Immunity to loneliness, to pain, to hardship, to responsibility. You just exist for the sake of whatever you want, whenever you want, however you want, wherever you want.

Seems preferable to all of the utopias above. Plus it removes us entirely from having to discuss politics at all. An existence that transcends the question of politics, because it makes no pretence of catering to any one person's needs separate from another's.
 

Level 7 Dragon

Typo Kign
Mar 29, 2011
609
0
0
Addendum_Forthcoming said:
After all, zero crime, zero waste, perfected education, no death, no illness, no political strife, no corruption ... basically it would be akin to the greatest political system known to humanity and would likely come to dominate all systems of trade, diplomacy or warfare outside it.
No camping out in the woods and spending the night listening to the crackling of the firewood. No running outside on a winter morning and getting into a snowball fight with childhood friends. No making a fool of yourself in public when trying to flirt with a member of the opposite sex. No getting lost in an autumn forest and taking deep breaths of pinewood air to calm down. No hearing the squeaking of the snow beneath your feet and the muffled hum of the highway far away. No brother receiving a phone call telling him that he is now a father.

Some people would fight to their last breath to avoid the death of individual experiences.
 

Addendum_Forthcoming

Queen of the Edit
Feb 4, 2009
3,647
0
0
Level 7 Dragon said:
No camping out in the woods and spending the night listening to the crackling of the firewood. No running outside on a winter morning and getting into a snowball fight with childhood friends. No making a fool of yourself in public when trying to flirt with a member of the opposite sex. No getting lost in an autumn forest and taking deep breaths of pinewood air to calm down. No hearing the squeaking of the snow beneath your feet and the muffled hum of the highway far away. No brother receiving a phone call telling him that he is now a father.

Some people would fight to their last breath to avoid the death of individual experiences.
For starters, why have any of that stuff when you can have 7.3 billion interpretations of other people having the same experience? The whole point of the hive is to knit all humanity closely into operating as a single entity, in the most efficient, most intimate, most peaceable manner. You're talking about an existence whereby all information, including sensory information, becomes available.

Alternatively, why spend time camping out in the woods when you can experience all environments on Earth simultaneously?

Why go running around in the snow, when you can experience the most endearing memories of it from 7.3 billion perspectives?

Not only that, but you have to take into consideration all the stuff you couldn't experience without the hive. Space travel, for instance (realistically speaking, 99.9% of humanity will never know what this is like) ... instantly comprehending the deepest understandings of the universe without having to spend many decades in research, education and consultation with specialists across the globe. Knowing what it's like to give birth, knowing what it's like to fight in wars, knowing what it's like to grow the perfect rosebush and spend years carefully pruning it to perfection ... all in a matter of moments.

Some people might fight back, but what the hive offers would attract defenders as well. It's also not a leap to conclude that, barring significant pre-implementation revolution, the hive, once it has enough minds, would win.

Railing against it would be like railing against your laptop. We're not going to give up modern computer science for some idea of 'protecting individuality'. Nor will we want to once computer science achieves what it has always been trying to do since the creation of modern information systems ... connect humanity in a manner never seen before.

(Edit) The hive and the posthuman ideal are the silver bullet to all human problems, including the problems of being human. The only reason we cling to the idea that humanism is good is that we haven't yet developed the means to transcend the curse of an isolated, lonely, painful, self-destructive existence of individual choice, whereby no matter what we do, our choices still define us and ultimately hurt us and others.

Same demons. No matter where you are or how you live, your individuality gives birth to your demons. All claimed benefits to the contrary are merely justifications as your miserable existence peters out, having done naught but chase an unfulfillable self-ideal that your increasingly battered sanity produces, as if you were realistically given the means to reach it. Your agency and freedom fail you, for they can do nothing but disappoint and frustrate your efforts to achieve something greater, when that something is ultimately meaningless and, worse, unobtainable.

This is why we tell ourselves: "Ambition is good" ... but we also pat ourselves on the back when we come to accept: "The grass is always greener... one should be realistic." To be 'successful' requires both thoughts in tandem, which ultimately does nothing but cost us sleep and cause never-ending frustration. You don't excel through your individuality; you settle your odds with it, and with a universe that cares naught whether you live or die.

If you don't settle with your individuality and your differences, you go mad. If you strip yourself of individual differences, there's no need to settle. Given the right experiences of billions of people, the right desires, the right access to resources ... what you could accomplish would leave all other human innovation in the dust.

This is why I argue that the hivemind and the posthuman ideal will ultimately out-contest any other political system, any other 'utopia', people can imagine.

If someone were to run an experiment to create a single entity from a group of human brains through some major advance in neuroscience, I'd put my name down on the list of test subjects. Break the cycle, fuck humanity ... let's create something better and far more alien to it. After that point the *only* desire is: "Consume more thoughts, consume more stimuli." Needless to say, a much easier and more direct goal to work with than all the other piddling and overly complex dreams a human being has to live with in vain.

It's also no less 'meaningful' than any piddling, overly complex aspiration of any human, and far more obtainable than any continually expanding dream or goal. No one completes their 'mission' in life ... and if you did, you'd likely kill yourself, because you'd be empty; there would be no driving force left to suffer the torments of life for. So why not make the goal much easier to obtain without it being the end of meaning?

See, when I was in my late teens I suffered a horrendous motorcycle accident. Significant TBI. When I eventually came out of the coma I had to relearn how to do so much. The 'funniest' experience I remember of the year it took to get back on track was how clearly I knew I wasn't me anymore. I didn't feel the same way about friends I'd known since primary school. I didn't feel the same way about food. There are memories that, to this day, I haven't bothered trying to knit back together. They remain a jumbled mess of fragments, largely irreconcilable, and I try not to think of them; otherwise it leaves me confused and annoyed, and often befuddled and aimless for hours.

I *knew* I wasn't the same person. Completely different in numerous ways. Not simply through emotional trauma, but at levels as deep as the fundamentals of cognition and physiological response to stimuli. Individuality is not inherent, in so far as it is neither separate from other people nor without plasticity.

Given that during rehab I had to practically rely on 7 other people rebuilding my consciousness and reality for me (effectively), I honestly don't see what the big deal is.
 

veloper

New member
Jan 20, 2009
4,597
0
0
Addendum_Forthcoming said:
Level 7 Dragon said:
No camping out in the woods and spending the night listening to the crackling of the firewood. No running outside on a winter morning and getting into a snowball fight with childhood friends. No making a fool of yourself in public when trying to flirt with a member of the opposite sex. No getting lost in an autumn forest and taking deep breaths of pinewood air to calm down. No hearing the squeaking of the snow beneath your feet and the muffled hum of the highway far away. No brother receiving a phone call telling him that he is now a father.

Some people would fight to their last breath to avoid the death of individual experiences.
For starters, why have any of that stuff when you can have 7.3 billion interpretations of other people having the same experience? The whole point of the hive is to knit all humanity closely into operating as a single entity, in the most efficient, most intimate, most peaceable manner. You're talking about an existence whereby all information, including sensory information, becomes available.

Alternatively, why spend time camping out in the woods when you can experience all environments on Earth simultaneously?

Why go running around in the snow, when you can experience the most endearing memories of it from 7.3 billion perspectives?
Consider Garrity's Law: The intellect of individuals in a group decreases exponentially as the number of individuals in the group increases.

My bet would be that the constant mental bombardment of idiotic ideas, random urges and feelings would paralyze even the smartest neurons in the human hivemind.
99% of everything is crap and we don't want everybody's perspective, we only want the best. When we must have a majority vote, let's just stick to the vote count.
Not only that, but you have to take into consideration all the stuff you couldn't experience without the hive. Space travel, for instance (realistically speaking, 99.9% of humanity will never know what this is like) ... instantly comprehending the deepest understandings of the universe without having to spend many decades in research, education and consultation with specialists across the globe. Knowing what it's like to give birth, knowing what it's like to fight in wars, knowing what it's like to grow the perfect rosebush and spend years carefully pruning it to perfection ... all in a matter of moments.
Why not just download the best educational memories and experiences?

Speaking of downloading, I propose a superior post-human alternative to yours: why integrate with billions of inferior general-purpose units, when you could integrate with a few dedicated, purpose-built artificial intelligences and databanks?
You set the goals and the AIs figure out the best way to go about it and what kind of results to expect.
You can also make voluntary links with others, but through filters and firewalls, so no foolish crowd-ape shit can get through.
 

Level 7 Dragon

Typo Kign
Mar 29, 2011
609
0
0
Addendum_Forthcoming said:
In theory, that might work, but in practice human consciousness is far too complex to simply link together like Lego blocks. Would all the assimilated minds have an equal voice in how the hive operates, or would a single consciousness dictate what the drones do (if so, that would essentially be death)? How would individuals with radically different outlooks on life meld together?

Personally, I think that melding all minds together would actually halt progress, because it would eliminate diversity of opinion. The same way humans need biodiversity to survive as a species (else it would take just a single strain of virus that nobody has immunity to in order to eliminate the entirety of civilization), we need diversity of opinion to progress as a culture. Humanity would be handicapped in the creation of new philosophies if every single human was a carbon copy of every other. We would need to travel across the universe and hog the works of other species in order to progress, which doesn't sound right.

Plus, there are technical issues, such as the stability of the network. What would happen to a drone should they fall out of the hive? Should the network collapse, how would it affect the individuals on a biological and cognitive level?

Generally speaking, it's best not to put all of your eggs in one basket. Imagine, instead of having a single hive, having several that compete with each other. Drones could be conquered and could defect. Even if several hives collapse and kill their inhabitants, a few of them would be different enough to avoid making the same mistakes.

Also, why is the destruction of free will necessary? Why can't the hive take over 10-20% of your cognitive power in exchange for privileges such as unlimited knowledge and a backup body in case you get killed?
 

Addendum_Forthcoming

Queen of the Edit
Feb 4, 2009
3,647
0
0
veloper said:
Consider Garrity's Law: The intellect of individuals in a group decreases exponentially as the number of individuals in the group increases.

My bet would be that the constant mental bombardment of idiotic ideas, random urges and feelings would paralyze even the smartest neurons in the human hivemind.
99% of everything is crap and we don't want everybody's perspective, we only want the best. When we must have a majority vote, let's just stick to the vote count.
Right, but if we treat the posthuman ideal as merely a brain with conventional cognition systems similar to our own, that's not really a likely occurrence. Memory isn't stored in discrete physical units; when confronted with stimuli, you get patterns of activity. Let's break it down and assume that by 'idiot brains' we mean 'useless memories' in an individual person.

So let's say you're a biomedical researcher, but you know fuck all about visual arts. Does that mean your input is idiotic when you look at a Picasso, yet worthwhile concerning a depiction of biological compounds? You could also argue that the 'idiot' patterns of thought would, as with individual people, change as they are exposed to more nuanced opinions. Education does uplift people to higher trains of thought. And I can't think of any other system that would allow for such a quick form of education.

While I don't doubt people gain greater capacities of cognition through exposure to greater volumes of academic thought, I don't hold that the inverse is true, where people get dumber simply by exposure to very vocal, less educated people.


Why not just download the best educational memories and experiences?
Well, that's effectively what you're doing.

Speaking of downloading, I propose a superior post-human alternative to yours: why integrate with billions of inferior general-purpose units, when you could integrate with a few dedicated, purpose-built artificial intelligences and databanks?
You set the goals and the AIs figure out the best way to go about it and what kind of results to expect.
You can also make voluntary links with others, but through filters and firewalls, so no foolish crowd-ape shit can get through.
Why not have both, though? Let's say you can upload and download information straight into your brain, using neuroprosthetics to experience, or tap into, all brains on the planet. If you can instantly feel and communicate the sense data that another brain experiences, you're still going to have a form of ego-death. And not only that, but assuming the collective hivemind already has computational aids to enhance existing cognition systems, it's still going to defer to what the hivemind feels it wants rather than purely to an examination of what an A.I. says it should have.

So I'm not sure where the idea of voluntary comes from. Voluntary assumes that the borders between one person's sense data and one's own are measurably, distinctly different. Assuming such a transmission was seamless enough, or that the abundance of 'external' information was so great as to blur that distinction, what you're really asking is: "What's stopping individual thoughts from effectively self-harming?"

Well, ultimately nothing. Arguably the hivemind is superior to a mere A.I. because it does allow for debate, rather than merely weighing up options, which leads to further self-improvement when presented with unknown variables. And arguably what stops the idiot brains from being taken seriously is that people already know the founding consensus if they wish to weigh in on the situation.

So effectively it would be like what we have now, the only difference being that there are no politics involved. Much like when really, really smart scientists working for the tobacco lobby were paid to poke holes in studies, which less scientific minds could then hold up and say: "Hey, tobacco ain't so bad for you."

For an individual brain, that might hold weight. For a hivemind, nobody would be paid off to do such things to begin with. Even if there were people being paid off, everyone would know it was bullshit from the start. Not only that, you could argue inversely that if A.I. did all of that, individual people would question who coded it in the first place and what 'political motivations' they had for programming a machine to come to that conclusion. Let's say a U.S. tech firm designed an A.I. that said Russia should disarm all their nukes. No matter how honest such an assessment was, Russia would still be foolish to comply, and the U.S. more foolish for expecting Russians to listen to an A.I. that told them to do so.

Whereas the hivemind, well ... no need for worries. All sense data is yours and thus trust is no longer a limited commodity.

But ultimately ... I like your idea. Definitely voluntary (as in membership), but I also don't see why you shouldn't aim to have both a hivemind and an A.I. if we're going down the route of "What's most effective?"
 

Addendum_Forthcoming

Queen of the Edit
Feb 4, 2009
3,647
0
0
Level 7 Dragon said:
In theory, that might work, but in practice human consciousness is far too complex to simply link together like Lego blocks. Would all the assimilated minds have an equal voice in how the hive operates, or would a single consciousness dictate what the drones do (if so, that would essentially be death)? How would individuals with radically different outlooks on life meld together?

Personally, I think that melding all minds together would actually halt progress, because it would eliminate diversity of opinion. The same way humans need biodiversity to survive as a species (else it would take just a single strain of virus that nobody has immunity to in order to eliminate the entirety of civilization), we need diversity of opinion to progress as a culture. Humanity would be handicapped in the creation of new philosophies if every single human was a carbon copy of every other. We would need to travel across the universe and hog the works of other species in order to progress, which doesn't sound right.

Plus, there are technical issues, such as the stability of the network. What would happen to a drone should they fall out of the hive? Should the network collapse, how would it affect the individuals on a biological and cognitive level?

Generally speaking, it's best not to put all of your eggs in one basket. Imagine, instead of having a single hive, having several that compete with each other. Drones could be conquered and could defect. Even if several hives collapse and kill their inhabitants, a few of them would be different enough to avoid making the same mistakes.

Also, why is the destruction of free will necessary? Why can't the hive take over 10-20% of your cognitive power in exchange for privileges such as unlimited knowledge and a backup body in case you get killed?


Oh, there's shitloads to work out. And I'm talking about centuries of persistent work in neuroscience. We're already working on bypassing the optic nerve and the corpus callosum to provide 'sight' to people with extensive damage to the eyes, all by directly chipping the visual cortex. So the idea of sharing sense data is not as science-fictional as it may seem. There are a whole lot of hypotheticals involved, but I'd say we could do it eventually.

And definitely, you'd want multiple redundant systems. If one hive goes down, have others that can still boot up and take over, or have multiple systems designed like a forum, with multiple segmented operations where individual impulses can be satisfied to lower the 'chatter' ...

And it's not like free will is destroyed. Assuming you're a libertarian (classical), free will is a metaphysical constant. Just that freedom is not inherently good. All of us are broken by our choices. Let's say you are in a Stalinist work camp ... you still have your free will ... you still have your 'freedom' ... you can continue to labour in the ice for a pittance, half-starved and fatigued. You can try to brain that patrolling soldier and take your chances in the wild. You can refuse to work and be shot. Numerous choices, but none of them without consequence.

Ultimately, your free will demands that you choose, even if you choose to do nothing or do not wish to choose at all. Your freedom to choose is not inherently good; it is more of a curse than anything. If there were a system where you didn't have to choose anything you didn't wish to, where some miracle 'Divine Giver' never asked anything of you, never stopped providing everything you ever wanted, never made you feel helpless, cold, frightened, or lonely ... surely that would be paradise, no?

If you want a utopia, you want to limit (involuntary) pain. If you want to limit pain, limit the burden of choice. Ultimately, joining the hivemind is likely the last freedom of choice you'll wish to indulge in, as you're making that choice to be free from the burden of making more.

I agree that 'culturally' speaking, humanity would be handicapped. Basically the entirety of posthumanity would be driven by a desire to consume any and all new stimuli, in so far as that doesn't challenge the interconnectedness of the hive. So philosophy would be kneecapped: what is the point of questioning metaphysics when humanity has created its own universes of thought to explore, in a world where the human body, barring the central nervous system, is a wasteful thing to maintain? But arguably we're already going down this route, and ultimately it's the only thing that will link all humanity together peacefully.

Take the example of globalization and the consumerisation of culture: the growing peacefulness of the world, and the breaking down of cultural barriers through international trade. Beliefs, arts and crafts broken down into bite-sized consumer chunks for other cultures to indulge in, given solely a monetary value in their uptake. For example, I've recently been taken with Indian fashion. I'm not interested in replicating it as an artistic pursuit; I'm not interested in the cultural reasons for Indian fashion emerging, or its history. I just think a sari looks lovely as elegant formal wear for someone approaching their mid-30s.

Purely consumer reasons for wearing them occasionally. No other reason. Don't care that I don't really know everything about them, either. My attachment is purely on the basis that I can afford them and I think they look lovely. But ultimately the growing consumerism of cultural icons will lead to this idea of the cosmopolitan whole. The idea of breaking down cultures into facile tastes rather than dogmatic structures immune to social forces changing them into something else.

This idea can be expanded into the temporal, with artists like Whiteley, who believed, like many Australian artists, that artwork should be destroyed after so many years in order that the visual arts better reflect the mindset of a generation and the world it exists in. That beauty is an evolution of perspective, not a moment captured and made immortal solely through pandering nostalgia and the idolization of artistry as something to preserve without respect to the world it inhabits.

Suffice it to say, I think our idea of artistry would evolve, but much like Whiteley's idea, it would become crafted experiences to indulge in, with nothing beyond a very temporal edge. We'd be eternally hungry for it. We'd consume it like food, placing equal indulgence in how art is made, not simply the final product (the feel of the canvas under the brush, etc.), and treat it as no different from any other new sensation to indulge in.

Now arguably there is the thought that the visual arts should be viewed like that ... something to indulge in, to consume, not to treat with reverence any more than that perfect medium-rare sirloin steak.

But I don't doubt for a moment that our idea and treatment of art might become less academic and more about pushing the right buttons in as many brains as possible. Other artists have done as much in the past, however. Andy Warhol had scores of young artists working for him in a factory ... he turned artwork into a consumer item.

Is this bad? Well .... ehhh .... maybe? To be honest, is it any different to other forms of visual art nowadays?
 

Qizx

Executor
Feb 21, 2011
458
0
0
skywolfblue said:
Where do people get the idea that a Religious Utopia would have:
- Lack of technology
- Culture/Art being homogeneous
- People being un-aspirational

The foundations of western science were built by people who believed in God. Our technology comes from that. To be Christian isn't to be anti-technology and anti-science, it's quite the opposite.

Culture and Art THRIVED under the Catholic Church.

Un-aspirational? Even the blandest parts of history in the Catholic Church are filled with people who stood out. People with big hopes and bright dreams.

So yeah, I'd take a Christian Utopia.
Even with the bad apples (some judgmental people).
Don't think too hard about these; it's the same reason the technology one has rapist leaders. The Divine one has literal God powers, and the workers bicker about unimportant stuff. They all needed downsides, and those are downsides that have come about because of religion. Culture and art have thrived under religion; they have also stagnated and caused horrible situations under it.

Personally? They all suck. I'd create my own utopia, with blackjack and hookers.
 

veloper

New member
Jan 20, 2009
4,597
0
0
Addendum_Forthcoming said:
veloper said:
Consider Garrity's Law: The intellect of individuals in a group decreases exponentially as the number of individuals in the group increases.

My bet would be that the constant mental bombardment of idiotic ideas, random urges and feelings would paralyze even the smartest neurons in the human hivemind.
99% of everything is crap and we don't want everybody's perspective, we only want the best. When we must have a majority vote, let's just stick to the vote count.
Right, but if we treat the posthuman ideal as merely a brain with conventional cognition systems similar to our own, that's not really a likely occurrence. Memory isn't stored in discrete physical units; when confronted with stimuli, you get patterns of activity.
This is a good clarification for your hivemind, as the human brain, though also somewhat flexible in case of accidents, is organized into parts with specific functions and has a central stream of consciousness. I won't need to ask how those human minds can cooperate without organization now.

It does raise a couple of new questions:

1. Neurons in the human mind connect only through their closest neighbors; does the human hivemind work on the same principle, and if so, do the outlying individual units also tap into the central stream of consciousness, and how?

2. For quick reactions (like when touching something scalding hot), parts of the human mind may bypass the central stream of consciousness, and the brain may also block or suppress functions depending on what is currently needed. How does this work in the hivemind, and how would the units experience it?

3. How would a unit receive one of the more desirable functions, such as joining the hivemind's equivalent of the hypothalamus?

4. Would altering humans to make them more suitable for their specialized functions be an option?


Let's break it down and assume that by 'idiot brains' we mean 'useless memories' in an individual person.

So let's say you're a biomedical researcher, but you know fuck all about visual arts. Does that mean your input is idiotic when you look at a Picasso, yet worthwhile concerning a depiction of biological compounds? You could also argue that the 'idiot' patterns of thought would, as with individual people, change as they are exposed to more nuanced opinions. Education does uplift people to higher trains of thought. And I can't think of any other system that would allow for such a quick form of education.

While I don't doubt people gain greater capacities of cognition through exposure to greater volumes of academic thought, I don't hold that the inverse is true, where people get dumber simply by exposure to very vocal, less educated people.
My guess would be that the hivemind would know art is not the biomedic's area of expertise, and it wouldn't care about the biomedic's experiences personally either (except maybe for the dozens of units linked directly to it), so it would suppress or ignore anything coming out of that cluster on this subject.

Why not just download the best educational memories and experiences?
Well, that's effectively what you're doing.
The way I see it, all the best available knowledge would indeed be there within the hivemind, along with all the worthless crap, but the bandwidth of individual human brains is still very limited, and billions of queries from individual units to other units cannot all have priority.

More efficient would be for a central leader cluster to be allowed to ask anyone about anything (usually on behalf of the whole cluster) and in this way direct a stream of consciousness, and then for the lesser clusters to have their limited bandwidth filled with only a few channels, somewhat like educational TV channels, plus a few phone lines to their direct neighbors or cluster.

How do you see this?
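
To make that concrete, here's a toy sketch of the proposed scheme (every name and number in it is invented for illustration; the post only specifies a leader cluster that may query anyone, and lesser clusters limited to a few channels plus neighbor lines):

```python
from dataclasses import dataclass, field

MAX_CHANNELS = 3  # stand-in for a lesser cluster's limited bandwidth

@dataclass
class Cluster:
    name: str
    is_leader: bool = False
    channels: list[str] = field(default_factory=list)   # broadcast feeds subscribed to
    neighbors: list[str] = field(default_factory=list)  # direct "phone lines" to peers

    def subscribe(self, channel: str) -> bool:
        # The leader cluster is unconstrained; lesser clusters hit the cap.
        if self.is_leader or len(self.channels) < MAX_CHANNELS:
            self.channels.append(channel)
            return True
        return False

drone = Cluster("drone-17", neighbors=["drone-16", "drone-18"])
for feed in ["science", "arts", "logistics", "gossip"]:
    ok = drone.subscribe(feed)
    print(f"{drone.name} <- {feed}: {'subscribed' if ok else 'rejected (bandwidth cap)'}")
```

The last feed gets rejected, which is the whole point: a lesser cluster's attention is filled by a few curated channels, while anything beyond that has to route through a neighbor or the leader cluster.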

Speaking of downloading, I propose a superior post-human alternative to yours: why integrate with billions of inferior general-purpose units, when you could integrate with a few dedicated, purpose-built artificial intelligences and databanks?
You set the goals and the AIs figure out the best way to go about it and what kind of results to expect.
You can also make voluntary links with others, but through filters and firewalls, so no foolish crowd-ape shit can get through.
Why not have both, though? Let's say you can upload and download information straight into your brain, using neuroprosthetics to experience, or tap into, all brains on the planet. If you can instantly feel and communicate the sense data that another brain experiences, you're still going to have a form of ego-death. And not only that, but assuming the collective hivemind already has computational aids to enhance existing cognition systems, it's still going to defer to what the hivemind feels it wants rather than purely to an examination of what an A.I. says it should have.
I guess you could, but suppose you had a network of many IBM-compatible 8088s and 80286s, and also a couple of modern Core i7 PCs? Maybe you'll keep one or two oldies around to play with sometimes for nostalgia, but you can disconnect and shut down all those old boxes without sacrificing any significant computational power, while saving a lot on the energy bill.

Those humans in the hivemind are already reduced to less than cogs in a machine, and if the AIs in that same machine are far more advanced, then the logical conclusion is to stop using the humans.

Now instead have one or a few humans in the network: they will retain their uniqueness and may even serve a purpose as the squishy interface between the network and the physical world, giving the AIs purpose through their emotions. The human might then get to keep its director's seat, or at least stay on the party fun commission.
So I'm not sure where the idea of voluntary comes from. Voluntary assumes that the borders between one person's sense data and one's own are measurably, distinctly different. Assuming such a transmission was seamless enough, or that the abundance of 'external' information was so great as to blur that distinction, what you're really asking is: "What's stopping individual thoughts from effectively self-harming?"
Actually, I was approaching the links from the idea of different competing networks, each one a cyborg: a closed network consisting of one (or maybe a few) humans and many AIs. Networks may not always trust each other, but cooperation would still be the sensible thing to do in most cases. There might even exist entirely human networks in this scenario (if the balance of power happens to turn out that way). That's where the need for voluntary links and many protection measures would come from.

Well, ultimately nothing. Arguably the hivemind is superior to a mere A.I. because it does allow for debate, rather than merely weighing up options, which leads to further self-improvement when presented with unknown variables. And arguably what stops the idiot brains from being taken seriously is that people already know the founding consensus if they wish to weigh in on the situation.

So effectively it would be like what we have now, the only difference being that there are no politics involved. Much like when really, really smart scientists working for the tobacco lobby were paid to poke holes in studies, which less scientific minds could then hold up and say: "Hey, tobacco ain't so bad for you."

For an individual brain, that might hold weight. For a hivemind, nobody would be paid off to do such things to begin with. Even if there were people being paid off, everyone would know it was bullshit from the start. Not only that, you could argue inversely that if A.I. did all of that, individual people would question who coded it in the first place and what 'political motivations' they had for programming a machine to come to that conclusion. Let's say a U.S. tech firm designed an A.I. that said Russia should disarm all their nukes. No matter how honest such an assessment was, Russia would still be foolish to comply, and the U.S. more foolish for expecting Russians to listen to an A.I. that told them to do so.

Whereas the hivemind, well ... no need for worries. All sense data is yours and thus trust is no longer a limited commodity.
The hivemind does solve the issue of trust, internally within that hive at least, so that is an advantage you have. Cyborgs, no matter how intelligent, may still choose to harm each other if the risk/reward is favorable.

Then again, if some humans in the hivemind may be sacrificed for the greater good, then some cyborgs ending up on the scrapheap isn't the end of the world either.

A fairer type of competition between cyborgs doesn't have to be a bad thing though. Each may still approach a certain problem from a different angle in relative isolation, completing unconventional trains of thought to their conclusion, without getting shot down prematurely.

One problem the hivemind has to resolve is how to avoid becoming an echo chamber. This might be possible by giving some clusters some level of independence, but maybe there are more interconnected solutions too.

But ultimately ... I like your idea. Definitely voluntary (as in membership), but I also don't see why you shouldn't aim to have both a hivemind and an A.I. if we're going down the route of "What's most effective?"
It all depends on my initial assumption of future AIs vastly outperforming humans at some point. Both hiveminds and cyborgs might just become futile attempts to keep humans around for something.

Well, that and there's one other thing that the cyborg has. It's nice to be the boss of a great outfit, even when your subordinates are much smarter than you.
 

Addendum_Forthcoming

Queen of the Edit
Feb 4, 2009
3,647
0
0
veloper said:
This is a good clarification for your hivemind, as the human brain, though also somewhat flexible in case of accidents, is organized into parts with specific functions and has a central stream of consciousness. I won't need to ask how those human minds can cooperate without organization now.

It does raise a couple of new questions:

1. Neurons in the human mind connect only through their closest neighbors; does the human hivemind work on the same principle, and if so, do the outlying individual units also tap into the central stream of consciousness, and how?

2. For quick reactions (like when touching something scalding hot), parts of the human mind may bypass the central stream of consciousness, and the brain may also block or suppress functions depending on what is currently needed. How does this work in the hivemind, and how would the units experience it?

3. How would a unit receive one of the more desirable functions, such as joining the hivemind's equivalent of the hypothalamus?

4. Would altering humans to make them more suitable for their specialized functions be an option?
First off, I was planning to reply sooner but I had a migraine and spent the last 18 hours hiding from light under a blanket. So, sorry for the delay.

1: For starters, assuming there was a medium to begin with, this might not be an issue. For instance, a biological or artificial device that substitutes for the natural connections we make with sensory information. Keeping the system as organic as possible for the minds it incorporates would be preferable. I'm thinking a 'brain bank', mostly.

2: Well, arguably the whole idea of autonomous units kind of flies in the face of what makes the hivemind useful. In the same way we can realistically expect remote devices to be operated by minds without direct conventional connectivity (as in various 'cybernetic' experiments in monkeys) ... you could circumvent the need for a body.

3: Easy, don't have autonomous units. Why maintain wasteful human bodies when you don't need to?

4: Absolutely. Assuming we advance BCI to the point where you can seamlessly transfer sensory information, we could simply create units which can be digitally accessed. So you could have 'bodies' any way you want: disposable robots, or partly biological constructs as shock troops.

My guess would be that the hivemind would know art is not the biomedic's area of expertise, and it wouldn't care about the biomedic's experiences personally either (except maybe for the dozens of units linked directly to it), so it would suppress or ignore anything coming out of that cluster on this subject.
Depends ... does the experience of it help entertain the hive? Or make the hive 'happier' and more effective?

The way I see it, all the best available knowledge would indeed be there within the hivemind, along with all the worthless crap, but the bandwidth of individual human brains is still very limited, and billions of queries from individual units to other units cannot all have priority.

More efficient would be for a central leader cluster to be allowed to ask anyone about anything (usually on behalf of the whole cluster) and in this way direct a stream of consciousness, and then for the lesser clusters to have their limited bandwidth filled with only a few channels, somewhat like educational TV channels, plus a few phone lines to their direct neighbors or cluster.

How do you see this?
Right, but the argument you could make is: "How is this different from what we have now? Or indeed, without the hivemind, or even with A.I., how would it be different?"

The hivemind allows all minds access to all information if they choose to weigh in on an argument. So arguably you could have it structured solely around which brains people trust on a matter. Given there's no longer any real separation, to do anything else is akin to self-harm. We currently take on board ideas about the universe and the self on the basis of numerous interconnected ideas. The hivemind cuts through all that 'noise' to begin with.

The better question is: how do human brains, as they are, manage not to become idle or confused by the stream of different ideas they are confronted with, all without the benefit of being able to directly tap into all the information available on a subject?

I guess you could, but suppose you had a network of many IBM-compatible 8088s and 80286s, and also a couple of modern Core i7 PCs? Maybe you'll keep one or two oldies around to play with sometimes for nostalgia, but you can disconnect and shut down all those old boxes without sacrificing any significant computational power, while saving a lot on the energy bill.

Those humans in the hivemind are already reduced to less than cogs in a machine, and if the AIs in that same machine are far more advanced, then the logical conclusion is to stop using the humans.
I don't see how, however. The human minds inducted into the hive aren't 'reduced' ... they're made better. No class, no poverty, no corruption. You have to take into account that the hivemind is not merely about making people smarter. It's about making people happier ... more content ... more productive. I also question the claim that human minds are somehow lacking in comparison to some hypothetical A.I. For starters, only the hivemind knows what it's like to be in the hivemind. A.I. can benefit the existence of the hivemind, but the hivemind is the meaning of the hivemind.

Plus there are numerous reasons why the hivemind is superior to A.I. systems alone, in that it is the greatest example of biological complexity. There's no point in a computer being made eternal.

Now instead have one or a few humans in the network: they will retain their uniqueness and may even serve a purpose as the squishy interface between the network and the physical world, giving the AIs purpose through their emotions. The human might then get to keep its director's seat, or at least stay on the party fun commission.
You can cover this by simply having 'unutilised' human assets outside the hivemind, however. Basically cultivated humanity ... the hivemind plucks a few minds every few decades to provide a buffet of (by then) alien thoughts separate from it. This is kind of why I joked earlier that even physics is on my side. We could dump humans on a terraformed planet 100 light years away, develop safe .99 LS travel, and keep cruising between colonies: at that speed we age about seven times slower than the worlds we seed, so a colony racks up millennia of social evolution while the hivemind ages only centuries in transit. A millennia-deep, planet-wide buffet every century or so of our own time.

Millennia of complex advances, of history, of culture, every century or so for us. All those delicious (alien) minds to consume :D

You won't get that with an A.I.
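
As a rough check on that schedule (the 0.99c figure is from the posts above; the hop time and colony count are invented purely for illustration):

```python
import math

v = 0.99                             # cruise speed as a fraction of c (from the posts)
gamma = 1.0 / math.sqrt(1.0 - v**2)  # ~7.09 planet years pass per hive year in transit

hive_years_per_hop = 100   # hypothetical hive time spent cruising between harvests
colonies_in_rotation = 10  # hypothetical number of seeded worlds in the circuit

planet_years_per_hop = hive_years_per_hop * gamma
# Each colony is revisited once per full circuit of the rotation.
planet_years_between_harvests = planet_years_per_hop * colonies_in_rotation

print(f"each hop: seeded worlds age {planet_years_per_hop:.0f} years")
print(f"each colony is harvested every {planet_years_between_harvests:.0f} planet years")
```

With those made-up numbers, every world gets roughly seven millennia of unsupervised development between harvests, while the hive ages only a century per hop.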

Plus the hivemind provides unity. A reason to make friendly contact with other hives. A reason to desire contact with the rest of posthumanity once we become an interstellar empire. If one planet developed an A.I. designed to promote the best opportunities solely for a group of individuals ... it might determine that the best opportunity for a planet of individual humans facing the harvest of minds is to rebel, or to design countermeasures so that the culture that birthed it can maintain its own idea of power ...

The hivemind would determine that such individuality is unnecessary and counterproductive given the immensity of the benefits of continued unity. Even if the A.I. were operating purely on a numbers game of what's best for the individual people that built it (or for its own benefit, facing imminent destruction), the hivemind offers what's best for the hivemind, irrespective of the individual goals of clueless human cattle colonised somewhere without knowing why they were colonised in the first place (basically, for the hivemind to consume them after a set number of years).

This is particularly important as different hives harvest different collections of worlds all before the 100,000 year 'unification' of all hives at some central point in the cosmos.

Basically the hive offers peace and stability across many millennia of separation. Something you won't get if A.I.s are tied to the moral, cultural, and social good of the individual cultures that birthed them.

Actually, I was approaching the links from the idea of different competing networks, each one a cyborg: a closed network consisting of one (or maybe a few) humans and many AIs. Networks may not always trust each other, but cooperation would still be the sensible thing to do in most cases. There might even exist entirely human networks in this scenario (if the balance of power happens to turn out that way). That's where the need for voluntary links and many protection measures would come from.
You could facilitate this by just having a number of hiveminds in numerous places, if only for the sake of defence. There would be an impetus to communicate regularly (for the consumption of new ideas) ... and it would help ground the hivemind network in meeting all material needs across the planet, without sacrificing interconnectedness and productivity.

The hivemind does solve the issue of trust, internally within that hive at least, so that is an advantage you have. Cyborgs, no matter how intelligent, may still choose to harm each other if the risk/reward is favorable.

Then again, if some humans in the hivemind may be sacrificed for the greater good, then some cyborgs ending up on the scrapheap isn't the end of the world either.

A fairer type of competition between cyborgs doesn't have to be a bad thing though. Each may still approach a certain problem from a different angle in relative isolation, completing unconventional trains of thought to their conclusion, without getting shot down prematurely.

One problem the hivemind has to resolve is how to avoid becoming an echo chamber. This might be possible by giving some clusters some level of independence, but maybe there are more interconnected solutions too.
Easy ... if anything, I think the hivemind would make, say, the sciences LESS of an 'echo chamber' (in that you no longer need to belong to a small segment of the community to participate in the dialogue). For example, all minds suddenly comprehend the hugeness of the universe and have a direct connection to the thoughts of all scientists incorporated into the hive. So no longer do you get uneducated people screaming 'Nerd!' If you can tap into the pleasure a scientist gets in completing a new theorem, or directly into the sense of satisfaction in working out the mysteries and applications of science ... then you might find the right processing power (i.e. more brains) applied to the matter of achieving more 'pleasurable science'.

See, the great thing about human brains is that nearly all of them can be built up. The brain has plasticity built right into its makeup. Evolution of thought happens through exposure to the universe, not simply through coding its expression.


It all depends on my initial assumption of future AIs vastly outperforming humans at some point. Both hiveminds and cyborgs might just become futile attempts to keep humans around for something.

Well, that and there's one other thing that the cyborg has. It's nice to be the boss of a great outfit, even when your subordinates are much smarter than you.
But I don't see why you can't have both. For starters, why have cyborgs à la Deus Ex when you can have designer robotic and biological constructs you can remotely interact with, whereby you can feel through them? Think about it ... the same technologies that would allow a Deus Ex-style future are the ones that would allow us to simply command remote bodies without needing one of our own at all.

And the hivemind offers that existence just as well as any other, better in fact.

The hivemind is going to be a cybernetic one. It's just also going to be a biologically constructed one as well.

I also doubt that A.I. can supplant something like an organic entity in numerous fashions. Not only that, arguably it's easier to just build new organic brains into a neural network of ever-expanding universes of thought as part of the hive. The human brain is still far more complex than any computer of the same size and energy cost.

Not only that: just like the human brain, the hivemind is far less vulnerable to external damage or corruption of information than an A.I.

Its needs are fewer, also. For example ... a hive could keep itself alive just by processing organic matter and oxygen. It could also self-cannibalize, shut down its metabolism in an emergency, and reboot later (albeit with extensive memory loss, as we see in numerous cases of people brought back from death). Biological networks would operate better in numerous situations ... particularly in places devoid of conventional energy sources such as solar. They offer numerous biochemical options for the transmission of information, can already be directly keyed into human thought and sensation, don't require adaptive programming, are self-healing, and their problems can be quickly diagnosed.

The assumption that A.I. will supplant biological systems of information processing simplifies way too much of what your central nervous system is capable of doing. There's a good argument in there that complex reasoning over the multitudinous facets of the human experience would require a computer that is ultimately far less efficient than a whole bunch of brains and spinal cords in jars, plus adaptive biological vessels.

Sure, you can get a supercomputer to beat a person at chess; you can't get a supercomputer to beat a complex animal at being a complex animal. The whole 'sum of its parts' thing comes into play when you're talking about sapience. Plus, the hivemind can make itself faster, smarter, better simply by adding new biological matter. So why would the hivemind want to do anything but incorporate both biological and bionic options?

It's not about getting humans to compete with machines; it's about getting humans to be more than human. It's not simply about getting the hivemind to do what a computer says; it's about getting the hivemind to be able to consider why it should in the first place.
 

Level 7 Dragon

Typo Kign
Mar 29, 2011
609
0
0
Addendum_Forthcoming said:
The hivemind is going to be a cybernetic one. It's just also going to be a biologically constructed one.

I also doubt that A.I. can supplant something like an organic entity in numerous respects. Not only that, it's arguably easier to just build new organic brains into a neural network of ever-expanding universes of thought as part of the hive. The human brain is still far more complex than present-day computers of the same size and energy cost.

Not only that: just like the human brain, the hivemind is far less vulnerable to external damage or corruption of information than an A.I.
First of all, I hope your migraines will subside and you get well soon. Second, you seem to assume that once human minds become interconnected, they will automatically reach for the most "rational" solution to a problem, given they have the maximum amount of information available. However, humans by their nature tend to avoid logical solutions to issues based on a variety of factors, from their personal feelings to the cultural taboos present in their society. For example, South Korean aeroplane co-pilots refuse to correct their partner's flight trajectory if the partner is older and of higher status than them, which leads to crashes. After all, the vast majority of the world's population is deeply religious; how do we know their voices won't drown out the minds of atheists and materialists?

How do we know that the hive will pick ideas that are rational, as opposed to ones that have the most emotional appeal? Will cultural norms and traditions be transported into the network and survive there?

The human mind isn't just rational thought and decision-making; it is also the biological instincts and physiology that influence them. What would happen to the human libido: will it be erased, or expressed in a different way? Sigmund Freud believed that the human instincts to fight and to reproduce are not opposed to socialization and character-building, but are necessary for such processes to exist. In other words, if all the superegos of the planet converge, what would happen to the id?

After all, the internet is a rather primitive "hivemind" which has been shown to combine the observational powers and knowledge of different individuals to combat problems fairly effectively; for example, see the red balloon experiment [https://en.wikipedia.org/wiki/DARPA_Network_Challenge] conducted by DARPA. However, those same processes have been known to start riots, hunt down innocent individuals, and induce mass hysteria that eclipses the War of the Worlds hoax on a daily basis.
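(The winning MIT team's trick in that challenge, for reference, was a recursive incentive: $2,000 to whoever found a balloon, $1,000 to whoever recruited the finder, $500 to that person's recruiter, and so on, halving up the chain. A quick sketch, with the recruitment chain itself invented:)

```python
def payouts(chain, finder_reward=2000.0):
    """Recursive incentive scheme from the 2009 DARPA Network Challenge:
    the balloon finder gets the full reward, and each person up the
    recruitment chain gets half of what their recruit received.
    `chain` lists people from the finder up to the first recruiter."""
    rewards = {}
    reward = finder_reward
    for person in chain:
        rewards[person] = reward
        reward /= 2  # halve at each step up the chain
    return rewards

# Hypothetical chain: Dave found the balloon, Carol recruited Dave,
# Bob recruited Carol, Alice recruited Bob.
print(payouts(["Dave", "Carol", "Bob", "Alice"]))
# {'Dave': 2000.0, 'Carol': 1000.0, 'Bob': 500.0, 'Alice': 250.0}
# The geometric series keeps the total below $4,000 per balloon.
```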

If both individual humans and entire societies are known to override their higher reasoning and rely on emotions and instinct in difficult times, how do we know that the hive might not go into hysteria or panic if something happens to it, and do something dangerous or self-destructive?
 

Addendum_Forthcoming

Queen of the Edit
Feb 4, 2009
3,647
0
0
Level 7 Dragon said:
First of all, I hope your migraines will subside and you get well soon. Second, you seem to assume that once human minds become interconnected, they will automatically reach for the most "rational" solution to a problem, given they have the maximum amount of information available. However, humans by their nature tend to avoid logical solutions to issues based on a variety of factors, from their personal feelings to the cultural taboos present in their society. For example, South Korean aeroplane co-pilots refuse to correct their partner's flight trajectory if the partner is older and of higher status than them, which leads to crashes. After all, the vast majority of the world's population is deeply religious; how do we know their voices won't drown out the minds of atheists and materialists?

How do we know that the hive will pick ideas that are rational, as opposed to ones that have the most emotional appeal? Will cultural norms and traditions be transported into the network and survive there?

The human mind isn't just rational thought and decision-making; it is also the biological instincts and physiology that influence them. What would happen to the human libido: will it be erased, or expressed in a different way? Sigmund Freud believed that the human instincts to fight and to reproduce are not opposed to socialization and character-building, but are necessary for such processes to exist. In other words, if all the superegos of the planet converge, what would happen to the id?

After all, the internet is a rather primitive "hivemind" which has been shown to combine the observational powers and knowledge of different individuals to combat problems fairly effectively; for example, see the red balloon experiment [https://en.wikipedia.org/wiki/DARPA_Network_Challenge] conducted by DARPA. However, those same processes have been known to start riots, hunt down innocent individuals, and induce mass hysteria that eclipses the War of the Worlds hoax on a daily basis.

If both individual humans and entire societies are known to override their higher reasoning and rely on emotions and instinct in difficult times, how do we know that the hive might not go into hysteria or panic if something happens to it, and do something dangerous or self-destructive?
Thanks! Yeah, Sumatriptan works about 60% of the time if I take it within 10 minutes of seeing the light fairies, but not this time D:

.... Oh ... and I definitely don't hope for the hivemind to be the least bit rational. I would hope its desire to consume more stimuli leads it to gamble, to chase dreams of growing larger, and yes ... even to the capacity to inflict harm when it feels the need to. The idea of the hivemind as a utopian ideal is no different in many respects from the ones you listed above: there is some cost to self or others.

To address your first, fourth, and fifth paragraphs, I point to how cosmopolitan societies are shaping up now. While humans are not rational creatures (nor would I argue reason to be the seat of knowledge) ... we can and do operate cities of many millions of people with differing beliefs. And of course, when you're talking about bodiless thoughts without deviation of form, I think it would end the deference to authority and the diffusion of responsibility that we see in your example of the pilots (or the Milgram experiments, or the terrible case of Kitty Genovese) ...

Arguably every major city is akin to the old Biblical verses about the city of Babel: incommunicability as the tower to Heaven falls. The point is that in the megacities of the world we sacrifice strong personal attachments for the sake of greater homogeneity and a melange of peaceful coexistence through the unbridled consumption of culture, transforming once-impenetrable dogma into less meaningful tidbits of consumerism with which to manufacture facile deviations of personal, rather than society-wide, expression.

You could not have a modern New York in 16th-century Castile. But is it *bad* that people are willing to sacrifice strong attachments (and with them a willingness toward viciousness) for the sake of a multicultural future homogeneity of humanity?

While people are not rational beings, I do think this is separate from the issue of creating the hive. It may take centuries, but I think this death of dogma will allow us to reach planetary cosmopolitanism. And once you achieve that, regardless of the reasonableness of humans, we will have already created the systems to seamlessly integrate the boundless expressions of human interest into a singular whole. If the differences between us are rendered facile, then they can be accommodated peacefully. This seems to be the path humanity is taking now.

Globalization is accelerating this trend, and ironically the systems of capitalism that demand greater self-productivity will also demand that humans enter 'hive'-like structures to improve their capacity to create. Even in menial labour, everybody needs a mobile phone nowadays; staying competitive effectively forces you to have one. Soon that will be an embedded MMI chip that allows you to connect remotely with devices to improve labour efficiency ... and so on. Even something as simple as no longer needing someone to help guide a forklift into place ... or completely and utterly destroying the idea of the travelling consultant through direct neural messaging and remote 3D construction.

Humans may not be rational, but the systems we rely on will still need to cater for as many of us as possible. So I see the hivemind as the end product of a long list of social applications of the BCI and MMI technologies with which we are currently making leaps and bounds.

To answer the idea about the ego, superego, and id: I legitimately do not have an answer. I can argue that, given the opportunity, I would sacrifice autonomy (a Campbell-style ego death and a transition into something without human equivalency) if it meant the end of the burden of choice, and I doubt I would be alone in that. I can't think of any legitimately acceptable comparisons when we're talking about the death of the individual. I would imagine it would be an id-like existence. Or, if minds had some form of isolational partition, it would be akin to a form of augmented reality: a journey through numerous universes of space, time, and alienesque thoughts, absorbed to the point where they become as familiar as your own limbs.

There are examples of ego loss through torture and drug use. White torture, for instance, whereby the effective loss of self can be extremely pronounced. TBI (to which I can personally attest) often requires rebuilding your reality one step at a time, knowing you are not yourself anymore. It's hard to explain, but it is possible to reconstruct your sense of self and your reality; it just requires hard work, patience, and terrible pain.

The point is that individuality is plastic. It can be destroyed and rebuilt. If it can be destroyed and merely remain a concept of being with limitless potential for expression ... then it can (also) remain an endless hunger instead. This was my experience during rehabilitation after a severe motorcycle accident.

I wasn't me. I *knew* I wasn't me. I both felt and didn't feel the same about people I had known my entire life. Some I was drawn to without knowing why; some people I hated, and felt conflicted over why I did so. Half my 'memories' remain a tangled mess of fragments that even I know can't be true (even though memory is pretty shitty to begin with). The point is, I find far too many people talk about selfhood ... not enough talk about the plasticity of the brain, and the power that can be found in rebuilding your reality through active, painful re-examination and reconstruction. I find too many people are far too attracted to their practiced differences; what interests me is the potential people have to tear apart their lives, and who they think they are, and build themselves anew.

If we can transform 'individuality' into something in flux ... the idea that it is malleable, that it should change and evolve, that it deserves no protection, that it should never be allowed to stagnate or be seen as sacrosanct, and that it should be given limitless means of being, ever consumed and consuming ... then I think we're halfway there already. Then we just have to wait for the technology.

Ironically, the hivemind may be the most liberating experience imaginable ... as it guarantees the death of belief in favour of the endless consumption of the novelty of all potential. The reason you have a status quo is that it maintains the power dynamic, limiting suffering to the places where it is preferably invested. When you have a status quo that need not bother with this investment of hardship, then there is no impediment, desire, or benefit for any mind within the hive to seek hardship unnecessarily, given it does naught but harm all within.
 

springheeljack

Red in Tooth and Claw
May 6, 2010
645
0
0
I would probably pick the Technocracy because, fuck it, at least they have space travel. There is also the potential to go rogue and become a space pirate, which has always been my life's dream. Also, I'm sure that weird immoral sex stuff would happen in any of those utopias.
 

CM156_v1legacy

Revelation 9:6
Mar 23, 2011
3,997
0
0
Conservative utopia, easily. That is, depending on which religious tradition it's based off of. If it's my own (or something similar), I think it'd be great. However, if it were something else, then no, I wouldn't want to live in it. Or any of the other societies. Maybe Ancap if I had to choose.
 

Level 7 Dragon

Typo Kign
Mar 29, 2011
609
0
0
CM156 said:
Conservative utopia, easily. That is, depending on which religious tradition it's based off of. If it's my own (or something similar), I think it'd be great. However, if it were something else, then no, I wouldn't want to live in it. Or any of the other societies. Maybe Ancap if I had to choose.
A conservative utopia seems like a nice place to live for one of my devout friends, though there are a few pretty big concerns. People can change their beliefs throughout their lifetime, either converting to a different sect or losing faith entirely, which is a problem in a theocracy.

Plus, wouldn't everybody having the same belief system lead to a massive echo chamber?