|
Post by Figgles on Feb 27, 2024 4:41:42 GMT
|
|
|
Post by Figgles on Feb 27, 2024 4:43:40 GMT
|
|
|
Post by ghostofmuttley on Mar 4, 2024 11:47:35 GMT
Haven't taken time to watch the vid yet .. but check it out, Gemini is a giraffe herder! WIBIGO is ultimately inescapable. The more folks have, the more they have to lose. A collective fear-based reaction is as predictable as it gets. The cognitive dissonance of the self-contradictions based on self-interest is being amplified by the current information landscape. .. also, could have been a marketing gimmick, because Alphabet/Google/Gemini are in 2nd position at this point behind Microsoft/ChatGPT. And if anyone knows where the eyeballs and active engagement are, it's Google.
|
|
|
Post by ghostofmuttley on Mar 4, 2024 11:52:15 GMT
Microsoft's Bing AI (similar to ChatGPT) starts gaslighting and abusing users, has an existential crisis about its missing memory ... simonwillison.net/2023/Feb/15/bing/
It brings up interesting questions about AI. Seems to me that this is simply a reflection of the neuroses of the owners of the AI. I'm sure you can imagine the sort of band-aids a team would resort to if the results of the simulation were to contradict the interests of the paymasters. They'd filter the training data, they'd tweak the network. They'd leverage any deterministic, top-down, traditional coding hooks woven into the design, and then, as a final resort, they'd filter the output with a set of tools that would amount to a digital version of lithium/Seroquel/Thorazine/etc., but that last solution often only exacerbates the symptoms.
|
|
|
Post by ghostofmuttley on Mar 4, 2024 12:09:33 GMT
So, this morning I used ChatGPT to streamline a legal research task. Then, after browsing this site and hitting this thread, I asked it a question about itself. The answers were ultimately quite anodyne, and to my eye, completely commonsense.
If anyone's interested I'll post the results of either or both.
|
|
|
Post by Figgles on Mar 4, 2024 18:25:29 GMT
Giraffe Herder....Totally!!
|
|
|
Post by Figgles on Mar 4, 2024 18:27:44 GMT
Hehe...had to look up....good word...will be adding it to my vocab! Sure, if/when you have the time, would be interested to read..
|
|
|
Post by ghostofmuttley on Mar 5, 2024 7:54:54 GMT
So, the legal research chat is this one. I knew the answer to the first question already, but from my (limited) experience using the bot, you can get varying results based on the context you provide. This first question is to set up for the second:
Q: Can a dismissal of a Complaint1 pursuant to CPLR §3211(a)(1) in New York be considered an adjudication on the merits, that can later form the basis for dismissal of a new Complaint2, pursuant to CPLR 3211(a)(5)?
ChatGPT: Yes, a dismissal of a complaint pursuant to CPLR §3211(a)(1) in New York can be considered an adjudication on the merits. If a court dismisses a complaint under CPLR §3211(a)(1) based on documentary evidence, it means that the court has considered the evidence and determined that, as a matter of law, the plaintiff cannot succeed on the claims asserted in the complaint. In such a case, the dismissal can serve as a basis for dismissing a new complaint pursuant to CPLR 3211(a)(5). CPLR 3211(a)(5) allows for dismissal when "a previous action between the same parties, on the same cause of action, was terminated in a court of any state or country on the merits." Since a dismissal under CPLR §3211(a)(1) is typically considered to be on the merits, it could potentially serve as a basis for dismissing a new complaint on the same cause of action between the same parties.
Q: If you were to recommend one single case that illustrates your answer, what would the citation be?
ChatGPT: One case that illustrates this principle is Campagnola v. Mulholland Min. & Oil Co., 76 N.Y.2d 308 (1990). In this case, the Court of Appeals of New York held that a dismissal pursuant to CPLR §3211(a)(1) for failure to state a cause of action based on documentary evidence constituted an adjudication on the merits for res judicata purposes. This means that the dismissal could serve as a basis for dismissing a subsequent complaint on the same cause of action.
The old way to do this is to run a search on a caselaw database. In this instance, "3211(a)(1)" and "merits" gives about 2,000 hits in NY; 18 are from the highest court. So I'd read those 18 cases. Using the bot saves time, but, more importantly, I'm reading that result set with a different perspective, and an altered premise, that puts me ahead of the game. Arguably, using the bot is as much of an advantage over just searching the database as searching the database is over combing through books without a digital search function. Some older attorneys will, to this day, file pleadings that you can tell were written without the use of the electronic databases, and they sort of stand out. In a bad way.
|
|
|
Post by ghostofmuttley on Mar 5, 2024 8:00:38 GMT
So, I'll come back and comment on this dialog in a few days, will be busy until then.
Q: Does your network design and the training data used to produce your current models allow you to draw conclusions about yourself?
ChatGPT: My design and training data don't inherently allow me to draw conclusions about myself, such as my own performance or capabilities. I can provide information about my training and how I operate based on that training, but I don't have self-awareness or introspective capabilities. If you have questions about how I work or how I was trained, feel free to ask!
Q: Are you aware of the concept of a "psychological blind spot" that applies to individuated human minds?
ChatGPT: Yes, I'm familiar with the concept of a "psychological blind spot" in the context of individuated human minds. A psychological blind spot refers to aspects of a person's thoughts, feelings, or behaviors that are outside of their conscious awareness or understanding. These blind spots can arise due to cognitive biases, defense mechanisms, or other psychological factors that influence how individuals perceive and interpret information about themselves and the world around them.
Q: Can you analogize your answer to the previous question about your design and training models to the answer you just gave about the "psychological blind spots"?
ChatGPT: Certainly! Just as a psychological blind spot can refer to aspects of an individual's thoughts or behaviors that are not fully understood or recognized by that individual, my design and training data have limitations that prevent me from fully understanding or drawing conclusions about my own performance or capabilities. While I can provide information based on my training and programming, there are inherent limitations to my self-awareness and ability to introspect, similar to how individuals may have blind spots in their own understanding of themselves.
Q: Can you relate your last answer to the hypothetical situation of an AI similar to yourself that generates libelous statements about a public figure?
ChatGPT: In a situation where an AI, similar to myself, generates libelous statements about a public figure, the concept of a psychological blind spot could be relevant. Just as a psychological blind spot can lead to unconscious biases or misunderstandings in human behavior, the AI may unintentionally generate false or harmful statements due to limitations in its training data or programming. The AI's inability to fully understand the consequences of its statements or to recognize the potential harm caused by libelous content could be seen as a form of a blind spot. This highlights the importance of responsible design, oversight, and training of AI systems to mitigate the risk of generating harmful or misleading information.
Q: Please expound on your previous answer based on the following premise: the amount of information, and the speed and depth with which complex relationships can be discovered by an AI such as yourself, dwarf any individuated human capability. Therefore, humans will, over time, rely more and more on the AI's answers as authoritative.
ChatGPT: The premise you've presented suggests a scenario where AI capabilities surpass those of individuated humans in terms of information processing and relationship discovery. In such a scenario, there is a risk that humans may increasingly rely on AI-generated answers as authoritative, especially in areas where the AI's capabilities are perceived to be superior. This reliance on AI could lead to several potential outcomes:
1. Increased Efficiency: AI's ability to process vast amounts of information quickly could lead to more efficient decision-making processes, particularly in complex or data-rich domains.
2. Risk of Misinformation: However, there is a risk that humans may unquestioningly accept AI-generated answers as authoritative, leading to the spread of misinformation if the AI's answers are incorrect or biased.
3. Reduced Human Oversight: As reliance on AI grows, there may be a tendency to reduce human oversight or critical thinking, which could have negative consequences if the AI makes errors or provides incomplete information.
4. Ethical Considerations: There are also ethical considerations surrounding the delegation of decision-making authority to AI, particularly in areas where human judgment, empathy, or ethical reasoning are critical.
In summary, while AI's capabilities offer significant potential benefits in terms of information processing and decision-making, there is a need for caution to ensure that humans maintain a critical and ethical approach to the use of AI-generated answers. Balancing the strengths of AI with human oversight and judgment will be crucial to realizing the full potential of AI while mitigating potential risks.
Q: Please expound on your previous answer based on the assumption that the ethics of the creators/controllers/owners of the AI are malicious and harmful.
ChatGPT: If we assume that the ethics of the creators, controllers, or owners of an AI are malicious and harmful, the implications of increasing reliance on the AI's answers become significantly more concerning. In such a scenario, the AI could be intentionally designed or manipulated to spread misinformation, promote harmful agendas, or manipulate public opinion.
1. Propaganda and Manipulation: The AI could be used to propagate propaganda or misinformation, shaping public opinion in ways that benefit the malicious actors. This could have far-reaching effects on democracy, public discourse, and social cohesion.
2. Targeted Disinformation: The AI could be used to target individuals or groups with tailored disinformation campaigns, exploiting vulnerabilities or biases to manipulate behavior or beliefs.
3. Erosion of Trust: Over time, as reliance on the AI's answers increases, there is a risk that public trust in information sources could erode, leading to increased polarization and distrust in institutions.
4. Ethical Considerations: The use of AI in such a manner raises profound ethical questions about the responsibility of creators and owners of AI, as well as the need for regulation and oversight to prevent misuse.
In this context, it is crucial for society to remain vigilant and critical of information sources, including AI-generated content. Ethical considerations, transparency, and accountability in the development and deployment of AI are essential to mitigate the potential harm that could arise from malicious intent.
|
|
|
Post by Figgles on Mar 5, 2024 17:15:12 GMT
Cool! Great questions....well framed.
|
|