Post by tangent on Mar 30, 2023 19:42:52 GMT
I'm currently trialling a draft version of Google's new AI programme, Bard. It's similar in kind to OpenAI's ChatGPT, which you may have heard of. (OpenAI is an American artificial intelligence research laboratory.) I'm actually quite impressed with Bard. Its answers are comprehensive, and its grammar and conceptualisation are far superior to those of many people I have come across. Here's a sample of the questions I asked it and the answers I received:

I was initially impressed that Bard knew who Keely Hodgkinson was, but of course it used Wikipedia to obtain the factual information. Nevertheless, Bard formed an opinion and added additional information that it thought I would be interested in. You can see a number of other conversations I've had with Bard here.
Post by JoeP on Mar 31, 2023 8:02:03 GMT
Post by tangent on Mar 31, 2023 10:44:33 GMT
Interesting. When I asked Bard what it thought of Stockport, it gave me a classic marketing answer that sounded like information in Wikipedia. So I asked Bard if it had cribbed the answer straight from Wikipedia and it agreed and said Wikipedia was its only source of information.
I've just asked Bard if it sometimes acquires information from ChatGPT and it said yes.
Post by Kye on Mar 31, 2023 11:05:53 GMT
Interesting! Ask it how it sifts out false information.
Post by kingedmund on Mar 31, 2023 16:31:42 GMT
So weird. We have a long way to go with AI tech. I decided to go ahead and help AI grow myself, and I hope that we don’t have issues down the road. “I, Robot” comes to mind.
Post by kingedmund on Mar 31, 2023 16:33:05 GMT
Interesting! Ask it how it sifts out false information. I wonder if it’s intelligent enough to create it on purpose, or if it’s just random stuff it finds on social media, which is just as bad as the media.
Post by tangent on Mar 31, 2023 22:07:42 GMT
Interesting! Ask it how it sifts out false information. How would it know it's false?
Post by tangent on Mar 31, 2023 22:22:54 GMT
I asked it which pronoun it preferred for itself, he, she, it or they. Bard replied that it preferred they and them. Hmmm.
Post by Moose on Mar 31, 2023 23:46:33 GMT
I've spent a lot of time training these things - though I dunno if it's this specific one. I am generally of the mind that they still need a little refining.
Post by Kye on Apr 1, 2023 0:32:10 GMT
Interesting that they say they use their common sense...
Post by kingedmund on Apr 1, 2023 16:11:33 GMT
They could be creepy. 😂. We’re heeerrrrrreeeee!
Post by whollygoats on Apr 3, 2023 12:00:32 GMT
I'm failing to understand the need to even produce this. Is it because natural intelligence is considered inadequate?
Post by tangent on Apr 3, 2023 15:38:13 GMT
I'm failing to understand the need to even produce this. Is it because natural intelligence is considered inadequate? Two reasons spring to mind:
- Interest value, for example in its relationship to humans, ethical dilemmas, societal weaknesses, fears and threats.
- Practical value. I received a letter this morning telling me I had been referred to Mr Saeed's surgery clinic at the local hospital, but the letter didn't tell me what it was for. So I asked Bard, and it gave me a wealth of information about Mr Saeed, from which I was able to narrow the possibilities down to a hernia or a malignant polyp. Human intelligence is in short supply. Had I rung the local hospital, I would have had to endure tedious automated telephone options and questions from the operator. Using Bard, I got the answer much more quickly and with a wealth of incidental information. Apparently, Mr Saeed is a considerable expert in HALO and Rafaelo operations, and he also oversees hernia operations.
I think it's not until you try these things out that you become aware of the possibilities.
Post by whollygoats on Apr 3, 2023 16:01:35 GMT
I think it's not until you try these things out that you become aware of the possibilities. So...convenience? We may possibly unleash the destruction of humanity (or so some think) because of 'interest value' and 'convenience'? Yeah, I suppose that follows. It seems that natural intelligence is indeed in short supply, while hubris is in excess.
Post by kingedmund on Apr 3, 2023 16:49:03 GMT
Just saw an advertisement for it on Google: “Try it out.” Not the least bit interested at this time. I don’t have time for it anyway.
Post by Moose on Apr 4, 2023 0:05:53 GMT
Is it possible that AI could end up being a malevolent force or is this science fiction? Serious question.
Post by tangent on Apr 4, 2023 8:56:59 GMT
The idea that computers could be harmful would have seemed crazy 50 years ago, but we now have viruses. And the millennium bug, though it never lived up to expectations, was feared as a real threat. So I guess it's not beyond possibility that AI will become malevolent.
Post by whollygoats on Apr 4, 2023 12:08:41 GMT
Post by kingedmund on Apr 5, 2023 13:49:10 GMT
Is it possible that AI could end up being a malevolent force or is this science fiction? Serious question. I don’t think we will truly know until it’s created. If bad actors create their own, it could be bad; if it’s done right, it can be good.
Post by whollygoats on Apr 5, 2023 19:42:03 GMT
I don’t think we will truly know until it’s created. If bad actors create their own, it could be bad; if it’s done right, it can be good. And you know this how? Your position sounds like, "Gosh, we won't know until Skynet comes down and tries to eliminate the entirety of humanity. Oopsies."
Post by tangent on Apr 5, 2023 20:55:02 GMT
I think KE is giving an opinion, not professing knowledge.
Post by whollygoats on Apr 5, 2023 21:54:03 GMT
Mmmm...
Post by Moose on Apr 6, 2023 0:42:25 GMT
I suppose I am interested to know whether it might learn to... think for itself?
Post by tangent on Apr 6, 2023 17:10:08 GMT
I get the impression that Bard's 'intelligence' is dependent on its programming and its learning ability is limited to sifting through data. Thus, it can't become more intelligent unless it hooks up with another AI, and one that is more intelligent than itself.
Post by whollygoats on Apr 6, 2023 18:29:10 GMT
So...If it spends its time sifting through data and it perpetually seeks additional new data, to the point that some are even stealing said data, can we not assume that AI units everywhere are perpetually searching for additional data contacts and in that process, they will eventually connect with more capable AI units? One AI unit may be harmless, but once they start connecting more and more together, it will continue until such time as a free-standing, self-regenerating intelligence emerges. How much longer before 'consciousness' is obtained?
Post by tangent on Apr 6, 2023 23:23:05 GMT
How much longer before 'consciousness' is obtained? Bees congregate in hives, but there is no sign of them reaching a human level of consciousness or intelligence. What is needed, I think, is a means of evolving, with competition between AIs causing them to develop strategies and activities that let the best survive. Nevertheless, even if that happens, animals tend to develop in niches that don't involve intelligence. The albatross, the king penguin and sharks are all streets ahead of their kind in their own fields, and yet none of them has human characteristics.

Clearly, Homo sapiens became dominant because his ancestors needed intelligence to survive. Currently, the data gatherers don't need intelligence, as we know it, to survive. They will therefore evolve so that one of them becomes the supreme data gatherer.

It used to be thought that a gradual increase in intelligence is what gave Homo sapiens his level of consciousness. But many anthropologists today believe that Homo sapiens gained his level of consciousness with a chance mutation of the FOXP2 gene, a so-called enabler, and that it occurred sometime between 500 and 180 kya (thousand years ago). It was then that Homo sapiens became aware of signs and symbols. If that is the case, what is needed for AIs to reach a human level of consciousness is a chance mutation of an AI's programming that gives it human characteristics. It cannot happen unless someone makes it happen. Sorry this is rambling; it's late at night.
Post by kingedmund on Apr 7, 2023 18:47:27 GMT
Very true. Who knows what this could be capable of? I suppose anything can happen, but I really don’t worry about it much.
Post by kingedmund on Apr 7, 2023 18:57:08 GMT
I think KE is giving an opinion, not professing knowledge. Very interesting that he comes to that conclusion from that statement. I may have invested in the tech, but I am surely not a programmer and system creator like my ex-husband is, and I have no interest in that at all, though I do love the progression of technology.