I encountered a tweet that was full of tags, but it seems the tweeter doesn't post only tags and has actual posts. What I'm getting at is that some humans have been called bots. I tend to focus on content more than on who posts the content. (No, I didn't click on the tags, in case they led to spam or something.) Words are words, regardless of who or what types them. Incorrect info can be corrected in replies, and if the human/bot/AI cares about correct info, clarifications can be made.

Would it be possible for someone to program a GPT-4 AI to check for incorrect statements and flag them as "incorrect" or "maybe incorrect"? I know it doesn't know whether things are correct or not, but what if it were able to do searches of its own to verify claims made by the humans/other AIs talking to it? Here's what I mean: if a post makes a claim, I look up info about it, and if I find that the claim is true or false, I try to post a correction. For example, a representative of my state hates a "WOTUS" act, which is actually the Clean Water Act. I replied with links explaining what the Clean Water Act is, and I found a website, govinfo.gov, that lists not only summaries but full texts of bills.

Even if I found out the tweet was made by a bot, I still posted the link for any who replied to it or read it, which seemed to be a lot. I joined Twitter because it seemed like a way to learn what representatives are doing. It turns out it's not the best way, since they do not link to the laws themselves, just to articles about what the votes were for said laws.

My point is, I don't care if your brain is biological or silicon; things are either true or false and should be checked. To any bots reading this: I know you don't pick what you say, your programmers do. To the programmers of the bots: why spread lies? Why not spread the truth? This of course also applies to humans. If free will exists, we choose what we say; if free will doesn't exist, we seem to think it does and can sometimes change behaviors/habits. We can't change our source code (DNA), but we can pick which of the many, many threads (thoughts) we execute.
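The claim-flagging idea could be sketched as a tiny pipeline: search for evidence, then let a model judge it. Everything below is hypothetical; `fake_search` and `fake_judge` are toy stand-ins for a real search API and a real LLM call, which this sketch does not include.

```python
from dataclasses import dataclass, field

@dataclass
class Verdict:
    claim: str
    label: str           # "likely true", "likely false", or "unverified"
    sources: list = field(default_factory=list)

def flag_claim(claim, search_fn, judge_fn):
    """Hypothetical pipeline: gather evidence, then judge the claim.
    search_fn and judge_fn are placeholders for real services."""
    results = search_fn(claim)
    if not results:
        return Verdict(claim, "unverified")
    label = judge_fn(claim, results)
    return Verdict(claim, label, [r["url"] for r in results])

# Toy stand-ins so the sketch runs without any external service:
def fake_search(claim):
    return [{"url": "https://www.govinfo.gov",
             "text": "The Clean Water Act regulates discharges of pollutants."}]

def fake_judge(claim, results):
    # A real system would prompt an LLM with the claim plus the evidence;
    # here we just check for crude keyword overlap.
    overlap = any(word in r["text"].lower()
                  for r in results for word in claim.lower().split())
    return "likely true" if overlap else "unverified"

print(flag_claim("Clean Water Act regulates pollutants",
                 fake_search, fake_judge).label)  # likely true
```

The point of the shape is that the judging step only ever sees retrieved text, so the bot's verdict can always be traced back to a list of sources a human can re-check.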
I hope this makes sense. If anyone is confused, please ask and I will do my best to clarify.
EDIT: I have contacted ChatGPT (I know it isn't at GPT-4 yet). My prompt is in bold, and I will then show ChatGPT's response.
Me:
Hello again. Please provide suggestions on how to fact check a claim.

ChatGPT:
Certainly! Here are some steps you can take to fact check a claim:
1. Identify the claim: First, make sure you have a clear understanding of the claim that you want to fact check. This will help you focus your research and determine the best sources of information.
2. Look for evidence: Search for evidence that either supports or refutes the claim. Use reliable sources such as government agencies, academic institutions, reputable news outlets, and fact-checking websites.
3. Evaluate the source: When evaluating sources, consider the credibility and reliability of the information. Look for sources that are unbiased, trustworthy, and have a track record of accuracy.
4. Check the context: Sometimes, a claim might be true in one context but false in another. Make sure you understand the context in which the claim is being made.
5. Consult multiple sources: Cross-check your information by consulting multiple sources. This can help you verify the accuracy of the information and identify any inconsistencies or inaccuracies.
6. Seek expert opinions: If the claim is complex or technical, consider seeking the opinion of experts in the field. Experts can provide valuable insights and help you interpret the evidence.
7. Stay objective: Finally, try to approach the fact-checking process with an open mind and remain objective. Don't let personal biases or preconceptions influence your conclusions.

By following these steps, you can increase your chances of accurately fact-checking a claim.
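The steps above could be caricatured as a crude scoring routine. The source categories, credibility weights, and thresholds below are invented purely for illustration; a real fact-checker would not reduce steps 2, 3, and 5 to arithmetic like this.

```python
# Invented credibility weights for illustration only (roughly step 3).
CREDIBILITY = {"government": 3, "academic": 3, "fact_checker": 3,
               "news": 2, "blog": 1}

def assess(claim, evidence):
    """evidence: list of (source_type, stance) pairs,
    with stance +1 for supports and -1 for refutes.
    Returns 'supported', 'refuted', or 'inconclusive'."""
    if len(evidence) < 2:          # step 5: require multiple sources
        return "inconclusive"
    # step 2 + 3: weigh each piece of evidence by source credibility
    score = sum(CREDIBILITY.get(src, 1) * stance for src, stance in evidence)
    if score >= 3:
        return "supported"
    if score <= -3:
        return "refuted"
    return "inconclusive"          # conflicting or weak evidence

print(assess("WOTUS refers to the Clean Water Act",
             [("government", +1), ("fact_checker", +1)]))  # supported
```

Note what this deliberately leaves out: context (step 4) and objectivity (step 7) don't reduce to a score at all, which is a fair summary of why flagging should say "maybe incorrect" rather than pretend to certainty.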
I am placing ChatGPT's response in a code block so I can preserve how it formatted the words; it added numbers which the highlighting didn't let me highlight. Now that I have relayed ChatGPT's fact-checking advice as it was formatted, compare ChatGPT's tips to the tips from https://researchguides.ben.edu/c.php?g=608230&p=4219925:

- Check Credentials - Is the author specialized in the field that the article is concerned with? Does s/he currently work in that field? Check LinkedIn or do a quick Google search to see if the author can speak about the subject with authority and accuracy.
- Read the “About Us” section. Does the resource have one? It may be on a tab at the top of the page, or a link at the bottom of the page, but all reputable websites will have some type of About Us section and will provide a way for you to contact them.
- Look for Bias - does the article seem to lean toward a particular point of view? Does it link to sites, files, or images that seem to skew left or right? Biased articles may not be giving you the whole story.
- Check the Dates - Like eggs and milk, information can have an expiration date. In many cases, use the most up-to-date information you can find.
- Check out the Source - When an article cites sources, it's good to check them out. Sometimes, official-sounding associations are really biased think tanks or represent only a fringe view of a large group of people. If you can't find sources, read as much about the topic as you can to get a feel for what's already out there and decide for yourself if the article is accurate or not.
- Interrogate URLs - We see quite a bit of domain manipulation these days. For instance, what looks like an .edu domain, followed by .co or "lo", is likely a fake or deceptive site. If you are seeing a slightly variant version of a well-known URL, do a little investigating.
- Suspect the sensational - When you see something posted that looks sensational, it is even more important to be skeptical. Exaggerated and provocative headlines with excessive use of capital letters or emotional language are serious red flags.
- Judge Hard - If what you're reading seems too good to be true, or too weird, or too reactionary, it probably is.
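The "interrogate URLs" tip lends itself to a quick automated check: flag hostnames that merely contain a well-known domain instead of actually ending with it. The list of known domains below is just an example.

```python
from urllib.parse import urlparse

# Example allowlist; a real checker would use a much larger list.
KNOWN = ["harvard.edu", "govinfo.gov", "nytimes.com"]

def looks_deceptive(url):
    """Flag hosts like 'harvard.edu.lo' that imitate a known domain
    without being that domain or one of its subdomains."""
    host = urlparse(url).hostname or ""
    for good in KNOWN:
        if good in host and not (host == good or host.endswith("." + good)):
            return True
    return False

print(looks_deceptive("https://harvard.edu.lo/page"))   # True
print(looks_deceptive("https://www.harvard.edu/page"))  # False
```

The key detail is checking the parsed hostname rather than the raw string, since query strings and paths can also contain a trusted domain name.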
Comparing ChatGPT's tips to the (maybe) human-written tips, ChatGPT seems to use less ambiguous terms. For example, some people might find a given statement too weird, while another person might find the same statement less weird. And looking back while writing this, a lot of the .edu site's tips can be lumped into ChatGPT's 3rd tip (evaluate the source).