Shauli Zacks Asks Mark About Alexi’s History and Safety Measures, and About the Future of AI
What led to the creation of Alexi, the AI-powered legal research software? What can we do to build trust in legal domain-specific AIs? How does Alexi address privacy and security? What might the future of legal AI tools look like? Shauli Zacks explored these questions and more with Alexi CEO Mark Doble in his recent article: Interview with Mark Doble - CEO at Alexi.
In the interview, Doble explains why Large Language Models (LLMs) such as GPT-4 are not necessarily suitable for research work: “First and foremost we need to recognize that LLMs are very good at language and text-based tasks, and currently that’s all they should be used for. They are too unreliable as a source of knowledge.”
So what is the right way to build an AI-powered tool that can be relied on for technical work such as in-depth legal research? “The domain-specific AI stack that meets industry-grade requirements will increasingly resemble the human brain. Just like human brains have regions for language, then many other regions for high-order cognitive processes, so too does the AI tech stack.”
The interviewer also asked about Alexi’s privacy and safety protocols: “What measures are in place to address privacy and security concerns when dealing with sensitive legal information in AI-powered solutions?” Doble answered in part: “We do not send any confidential customer data to LLMs hosted on third party APIs…This helps to avoid training any confidential information into models that have limited ability to control the output, [and] helps us focus on building the utility that we are looking for in our products.”
For the full answer and interview, read the original post on SafetyDetectives: Interview with Mark Doble - CEO at Alexi.