One constant, yet valid, argument against AI is how it makes things up. It's so confident in its fabrications that we believe them. This can be a huge issue that could cost you your credibility.
So how do I deal with this daily? I don't listen to AI. I don't read what it tells me. Instead, I skim the outputs and look for things that I don't know. Afterwards, I look those things up and find experts to learn from.
Essentially it goes like this:
- Ask AI a question.
- Scan the outputs for new words, phrases, and information.
- Google the new information and find human-led articles, studies, and videos related to it.
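The "scan for new words" step can be loosely sketched in code. This is a toy illustration, not a real tool: `known_terms` stands in for a hypothetical personal vocabulary, and in practice the filtering is a human judgment call, not string matching.

```python
import re

def unfamiliar_terms(ai_output: str, known_terms: set[str]) -> list[str]:
    """Return words in the AI output that aren't in our known vocabulary,
    preserving the order in which they first appear."""
    words = re.findall(r"[A-Za-z][A-Za-z-]+", ai_output.lower())
    seen: list[str] = []
    for w in words:
        if w not in known_terms and w not in seen:
            seen.append(w)
    return seen

# Hypothetical example output and vocabulary:
output = "Use retrieval-augmented generation to reduce hallucinations."
known = {"use", "to", "reduce", "generation"}
print(unfamiliar_terms(output, known))
# → ['retrieval-augmented', 'hallucinations']
```

Each term the sketch surfaces would then become a search query for human-written sources.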
Verification should always be done manually to confirm the information is correct. You could theoretically speed things up by asking another AI to check the outputs, but then you risk compounding hallucinations.
So how exactly does this help speed things up if you still need to work manually?
One of the most time-consuming parts of research is figuring out exactly what you need to look for. AI brings that all together by drawing on both its training data and the web for the main ideas and presenting them in an easy-to-read format.
So AI is best suited for surface-level research: answering the question, what main points do I need to know in order to move forward and really understand the topic?
Always verify anything AI tells you; it's far too confident when it's wrong.
What is your workflow when doing research with AI? Do you trust it? Or do you treat it with skepticism?