This is a good clarification for your hivemind, as the human brain, though also somewhat flexible in case of accidents, is organized into parts with specific functions and has a central stream of consciousness. I won't need to ask how those human minds can cooperate without organization now.
It does raise a couple of new questions:
1. Neurons in the human brain connect mostly to their nearest neighbors; does the human hivemind work on the same principle, and if so, do the outlying individual units also tap into the central stream of consciousness, and how?
2. For quick reactions (like when touching something scalding hot) parts of the human mind may bypass the central stream of consciousness, and the brain may also block or suppress functions depending on what is currently needed: how does this work in the hivemind, and how would the units experience this?
3. How would a unit receive one of the more desirable functions, such as joining the hivemind's equivalent of the hypothalamus?
4. Would altering humans to make them more suitable for their specialized functions be an option?
First off, I was planning to reply sooner but I had a migraine and spent the last 18 hours hiding from light under a blanket. So, sorry for the delay.
1: For starters, assuming there is a connecting medium to begin with, this might not be an issue. For instance, a biological or artificial device that substitutes for the natural connections we make with sensory information. Keeping the system as organic as possible for the minds it incorporates would be preferable. I'm thinking a 'brain bank', mostly.
2: Well, arguably the whole idea of autonomous units kind of flies in the face of what makes the hivemind useful. In the same way that we can realistically expect remote devices to be operated by minds without direct conventional connectivity (such as the various 'cybernetic' experiments in monkeys) ... you could circumvent the need for a body entirely.
3: Easy, don't have autonomous units. Why maintain wasteful human bodies when you don't need to?
4: Absolutely. Assuming that we advance BCI to the point where you can seamlessly transfer sensory information, we could simply create units that can be accessed digitally. So you could have 'bodies' any way you want. Disposable robots or partial biological constructs as shock troops.
My guess would be that the hivemind would know art is not the biomedic's area of expertise and wouldn't care about its experiences personally either (except maybe for the dozens of units linked directly to it), so it would suppress or ignore anything coming out of that cluster on this subject.
Depends ... does the experience of it help entertain the hive? Or make the hive 'happier' and more effective?
The way I see it, all the best available knowledge would indeed be there within the hivemind, along with all the worthless crap, but the bandwidth of the individual human brains is still very limited, and billions of queries from individual units to other units cannot all have priority.
More efficient would be for a central leader cluster to be allowed to ask anyone (usually as the whole cluster) about anything and in this way direct a stream of consciousness, and then for the lesser clusters to have their limited bandwidth filled up with only a few channels, somewhat similar to educational TV channels plus a few phone lines to their direct neighbors or cluster.
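Just to make the idea concrete, here's a toy sketch of that priority scheme. All the names and numbers (Router, Query, CHANNEL_LIMIT, the specific clusters) are invented for illustration, not part of the proposal itself:

```python
# Toy model of the bandwidth scheme described above: one leader cluster may
# query any unit and is served first, while ordinary clusters only get a few
# open channels at a time and can only reach their direct neighbors.
from __future__ import annotations
import heapq
from dataclasses import dataclass, field

LEADER_PRIORITY = 0      # lower number = served first
ORDINARY_PRIORITY = 1
CHANNEL_LIMIT = 3        # "a few channels" per ordinary cluster

@dataclass
class Query:
    priority: int
    source: str
    target: str
    topic: str

@dataclass
class Router:
    leader: str
    neighbors: dict[str, set[str]]              # allowed point-to-point lines
    queue: list = field(default_factory=list)
    open_channels: dict[str, int] = field(default_factory=dict)
    _counter: int = 0                            # tie-breaker for the heap

    def submit(self, source: str, target: str, topic: str) -> bool:
        """Queue a query if the source is allowed to make it."""
        if source == self.leader:
            priority = LEADER_PRIORITY           # leader may ask anyone, first in line
        else:
            # ordinary clusters: only neighbors, and only a few channels at once
            if target not in self.neighbors.get(source, set()):
                return False
            if self.open_channels.get(source, 0) >= CHANNEL_LIMIT:
                return False
            self.open_channels[source] = self.open_channels.get(source, 0) + 1
            priority = ORDINARY_PRIORITY
        heapq.heappush(self.queue, (priority, self._counter, Query(priority, source, target, topic)))
        self._counter += 1
        return True

    def serve_next(self) -> Query | None:
        """Serve the highest-priority pending query (leader first)."""
        if not self.queue:
            return None
        _, _, query = heapq.heappop(self.queue)
        if query.source != self.leader:
            self.open_channels[query.source] -= 1
        return query

# tiny demonstration
router = Router(leader="leader", neighbors={"cluster_a": {"cluster_b"}})
router.submit("cluster_a", "cluster_b", "crop yields")
router.submit("leader", "cluster_a", "art appreciation")
print(router.serve_next().source)   # -> "leader", served before the ordinary cluster
```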
How do you see this?
Right, but the argument you could make is: "How is this different from what we have now? Or indeed, without the hivemind, or even with A.I., how would it be different?"
The hivemind allows all minds access to all information if they choose to weigh in on the argument. So arguably you could simply have it decided solely by which brains people trust on a matter. Given there's no longer any real separation, to do anything else is akin to self-harm. We currently take on board ideas of the universe and the self on the basis of numerous interconnected ideas. The hivemind cuts through all that 'noise' to begin with.
The better question is: how do human brains as they are manage not to become idle or confused by the stream of different ideas they're confronted with, and all without the benefit of being able to directly tap into all the information available?
I guess you could, but suppose you had a network of many IBM-compatible 8088s and 80286s and also a couple of modern Core i7 PCs? Maybe you'll keep one or two oldies around to play with sometimes for nostalgia, but you can disconnect and shut down all those old boxes without sacrificing any significant computational power, while saving a lot on the energy bill.
Those humans in the hivemind are already reduced to less than cogs in a machine and if AIs in that same machine are way more advanced, then the logical conclusion is to stop using the humans.
I don't see how, however. The human minds inducted into the hive aren't 'reduced' ... they're made better. No class, no poverty, no corruption. You have to take into account the hivemind is not merely about making people smarter. It's about making people
happier ... more
content ... more
productive. I also question whether human minds are somehow lacking in comparison to some hypothetical A.I. For starters, only the hivemind knows what it's like to be in the hivemind. A.I. can benefit the existence of the hivemind, but the hivemind is the meaning of the hivemind.
Plus there are numerous reasons why the hivemind is superior to purely A.I. systems, in that it is the greatest example of biological complexity. There's no point to a computer being made eternal.
Now have instead one or a few humans in the network, and they will retain their uniqueness and may even serve a purpose as the squishy interface between the network and the physical world, and give the AIs purpose through their emotions. The human might then get to keep their director's seat, or at least stay on the party fun commission.
You can supplant this by simply having 'unutilised' human assets outside the hivemind, however. Basically cultivated humanity ... then the hivemind plucks a few minds every few decades to provide a buffet of (by then) alien thoughts separate from it. This is kind of why I made the joke before that even physics is on my side. We could dump humans on a terraformed planet 100 light years away, develop safe 0.99c travel, travel back to a formerly colonised world: 12,000 years of social evolution for the colonising humans, 200 years for the hivemind. One 12,000-year planet-wide buffet every 100 years.
12,000 years of complex advances made, of history, of culture, every 100 years for us. All those delicious (alien) minds to consume.
You won't get that with an A.I.
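For what it's worth, here's a back-of-the-envelope check on the physics, taking the 0.99c and 100 light-year figures literally. The exact numbers are only a sketch; the 12,000 years would be however long the colony has been left to evolve since seeding, which is a separate choice:

```python
# Rough relativistic arithmetic for the harvest cycle described above,
# assuming a 100 light-year one-way distance and a 0.99c cruise speed.
from math import sqrt

v = 0.99             # harvest ship cruise speed, fraction of c
distance_ly = 100.0  # one-way distance to the colony in light years

gamma = 1 / sqrt(1 - v**2)                 # Lorentz factor, ~7.1 at 0.99c
round_trip_home = 2 * distance_ly / v      # years that pass for the stay-at-home hivemind
round_trip_ship = round_trip_home / gamma  # years experienced aboard the harvest ship

print(f"gamma            ~ {gamma:.1f}")
print(f"hivemind waits   ~ {round_trip_home:.0f} years per round trip")   # ~202, i.e. the '200 years'
print(f"crew experiences ~ {round_trip_ship:.0f} years per round trip")   # ~28
```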
Plus the hivemind provides
unity. A reason to make friendly contact with other hives. A reason to desire contact with other posthumanity once we become an interstellar empire. If one planet developed an A.I. designed to promote the best opportunities solely for a group of individuals ... that A.I. might determine that the best opportunity for a planet of individual humans facing the harvest of minds is to
rebel, or to design countermeasures to ensure that the culture that birthed it can maintain its own idea of power ...
The hivemind would determine such individuality is
unnecessary and counter to the immense benefits of continued unity. Even if the A.I. is operating purely on a numbers game of what's best for the individual people that built it (or for its own benefit in the face of imminent destruction), the hivemind offers what's best for the hivemind, irrespective of the individual goals of clueless human cattle colonised somewhere without knowing why they were colonised in the first place (basically for the hivemind to consume them after a set number of years).
This is particularly important as different hives harvest different collections of worlds all before the 100,000 year 'unification' of all hives at some central point in the cosmos.
Basically the hive offers peace and stability across many millennia of separation. Something that you won't get if A.I.s are tied to the moral, cultural, and social good of the individual cultures that birthed them.
Actually, I was approaching the links from the idea of different competing networks, each one a cyborg: a closed network consisting of one (or maybe a few) humans and many AIs. Networks may not always trust each other, but cooperation would still be the sensible thing to do in most cases. There might even exist entirely human networks in this scenario (if the balance of power happens to turn out that way). That's where the need for voluntary links and many protection measures would come from.
You could facilitate this by just having a number of hiveminds. If only for the sake of defence, having hiveminds in numerous places. There would be the impetus to communicate regularly (for the consumption of new ideas) ... but it would help to ground the hivemind network in meeting all material needs across the planet, without sacrificing interconnectedness and productivity.
The hivemind does solve the issue of trust, internally within that hive at least, so that is an advantage you have. Cyborgs, no matter how intelligent, may still choose to harm each other if the risk/reward is favorable.
Then again, if some humans in the hivemind may be sacrificed for the greater good, then some cyborgs ending up on the scrapheap isn't the end of the world either.
A fairer type of competition between cyborgs doesn't have to be a bad thing though. Each may still approach a certain problem from a different angle in relative isolation, completing unconventional trains of thought to their conclusion, without getting shot down prematurely.
One problem the hivemind has to resolve is how to avoid becoming an echo chamber. This might be possible by giving some clusters some level of independence, but maybe there are more interconnected solutions too.
Easy ... if anything, I think the hivemind would make, say, the
sciences LESS of an 'echo chamber' (in that you no longer need to belong to a small segment of the community to participate in the dialogue). For example, all minds suddenly comprehend the hugeness of the universe and have a direct connection to the thoughts of all the scientists incorporated into the hive. So no longer do you get uneducated people screaming 'Nerd!' If you can tap into the pleasure a scientist gets from completing a new theorem, or directly tap into the sense of satisfaction in working out the mysteries and applications of science ... then you might find the right processing power (i.e., more brains) applied to the matter of achieving more 'pleasurable science'.
See, the great thing about human brains is that nearly all of them can be built up. Plasticity is built right into their makeup. Evolution of thought happens through exposure to the universe, not simply through coding its expression.
It all depends on my initial assumption of future AIs vastly outperforming humans at some point. Both hiveminds and cyborgs might just become futile attempts to keep humans around for something.
Well, that and there's one other thing that the cyborg has. It's nice to be the boss of a great outfit, even when your subordinates are much smarter than you.
But I don't see why you can't have both. For starters, why have cyborgs a la Deus Ex when you can have designer robotic and biological constructs you can just interact with remotely, whereby you can feel through them? Think about it ... the same technologies that would allow a Deus Ex style future are the same technologies that allow us to simply command remote bodies without needing one of our own at all.
And the hivemind offers that existence just as well as any other, better in fact.
The hivemind is going to be a cybernetic one. It's just also going to be a biologically constructed one as well.
I also doubt that A.I. can supplant something like an organic entity in numerous ways. Not only that, arguably it's easier to just build new organic brains into a neural network of ever-expanding universes of thought as part of the hive. The human brain is still far more complex than current computers of the same size and energy cost.
Not only that, just like the human brain, the hivemind is far less vulnerable to external damage or corruption of information than an A.I.
Its needs are fewer, also. For example ... a hive could keep itself alive just by processing organic matter and oxygen. It can also self-cannibalize, shut down its metabolism in an emergency, and reboot later (albeit with extensive memory loss, as we see in numerous cases of people being brought back from death). Biological networks would operate better in numerous situations ... particularly in places devoid of conventional energy sources such as solar. They offer numerous biochemical options for transmitting information, can already be keyed directly into human thought and sensation, don't require adaptive programming, are self-healing, and their problems can be quickly diagnosed.
The assumption of A.I. supplanting biological systems of information processing simplifies way too much of what your central nervous system is capable of doing. There's a good argument in there that complex reasoning over the many facets of the human experience would require a computer that is ultimately far less efficient than merely a whole bunch of brains and spinal cords in jars, plus adaptive biological vessels.
Sure, you can get a supercomputer to beat a person at chess, but you can't get a supercomputer to beat a complex animal at being a complex animal. The whole 'sum of its parts' thing comes into play when you're talking about sapience. Plus the hivemind can make itself faster, smarter, better ... simply by adding new biological matter. So why would the hivemind want to do anything but incorporate both biological and bionic options?
It's not about getting humans to compete with machines, it's about getting humans to be more than human. It's not simply about getting the hivemind to just do what a computer says, it's about getting the hivemind to be able to consider why it should in the first place.