Lower risk by checking these two items before picking any AI tool
...to prioritize privacy, security, and continuity
New AI tools are launching daily, each one seemingly more promising than its predecessor, which launched just weeks ago! It is exciting and overwhelming at the same time.
The overwhelm peaks once you recognize that the reasons for letting a tool take over your time-consuming tasks outweigh the reasons against (what were those again?!), and you now need to actually pick one from an array of options.
You should definitely compare and contrast the pros and cons of a few different AI tools that address the same use case before selecting one.
To take this step, you could ask a friendly chatbot like ChatGPT or Claude to list their various features, as we wrote about some weeks ago in this newsletter (see Q2. Which AI tools should I use?) from a more tactical and practical perspective.
But you should also evaluate two more macro points, namely continuity and security, which are independent of how you use the product and which we did not cover in the aforementioned article.
Let’s dig into each one.
Continuity
Many of the flashiest AI tools are created by startups that rise quickly but can also fail fast, leaving you scrambling to find an alternative. Technology startups, AI startups included, can fold even faster if they cannot attract enough customers to stay afloat and prosper.
Unfortunately, their failure rate is too high to ignore if you are a potential customer getting ready to rely on the tool to achieve personal or business goals in the medium term.
This is not to say that you should not become an early adopter of new tools. In fact, that is one of the best ways to learn quickly and get ahead of the curve. So, I urge you to be adventurous and explore, because this is an unprecedented time in technology in some ways.
But, if you value consistency, the newest tools on the block might not be right for you without thorough due diligence (or reading this newsletter so that I can do the diligence for you - wink! wink! Shameless self-promotion here!).
So, how do you know which tools will have continuity?
You don’t! Unlike most products, where user reviews and testimonials can help, AI tool reviews generally focus on the use of the tool itself and not the parent company’s longevity.
While there are no guarantees in life or business, there are some ways that you can indirectly estimate whether the tool will face extinction in the next year or two.
You might strongly consider tools that have
been tried and tested by virtue of having existed for some time already (this is tricky because most of the LLM-based AI startups, whose tools are predominantly the ones we are discussing here, are less than two years old);
accumulated a large user base (e.g., Gamma AI, with more than 50 million users in spite of its toddler age); and/or
acquired a list of coveted business (enterprise) customers, if the tool is also available to enterprises.
Business or enterprise customers, especially the larger ones, tend to be more conservative and risk-averse about which tools they sign up for. They are also known to run trials and conduct a series of checks before embracing new tools. So if reputable enterprises have adopted a tool, you can expect lower risk in adopting it yourself, especially when you have no other way to vet it.
Most AI startups will list well-known enterprise users on their websites. Make sure that the enterprises are listed as clients or customers and not partners; “partner” just means they have some kind of business relationship, not necessarily that they use the tool.
Security
The success of any AI tool, and your success in using it, are predicated on the information that you exchange with it.
This information could include personal data, private document uploads, project code or anything else that, if accessible and misused by anyone unauthorized, could violate your privacy and expose you or your business to significant risks.
When selecting an AI tool, make sure that two of the first things you check are
whether it has secure protocols in place so that no one inside or outside the company can access your data; and
how it will handle any information you give it (i.e., whether it saves it, lets you erase it, and uses it to train its models).
To explain these points in more detail, let’s take the example of Otter AI, a meeting note-taker tool that transcribes and summarizes your voice content and creates action items. (P.S. I will cover Otter AI in an upcoming issue of the ‘Meet AI’ section of this newsletter. Stay tuned.)
Otter AI seems to have addressed the security issues with transparency and reasonable security measures in place (at least in my opinion - no, I don’t have any relationship with the company at this time).
Internal security measures
Internal security refers to whether any employee or partner of the company can get access to a user’s privileged information.
Otter AI’s privacy and security policy, found here, explains how users can manage their settings to control who has access to their information and how and when their data is deleted.
They de-identify and encrypt user data before using it to train their models. While this point may reassure some folks that they cannot be identified and the conversation cannot be traced back to them, it could cause some discomfort to others.
Something to keep in mind is that most AI tools, especially those built for a specialized purpose like Otter, will most likely use user data to train their models. It is one of the ways in which they are able to improve their product. As such, there is no way to avoid it as a user. The best we can do is ensure that they are putting suitable safety measures in place such as encrypting data and removing any identifiable information.
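To make those two safety measures concrete, here is a toy Python sketch of my own (an illustration, not Otter AI’s actual pipeline, whose details are not public) showing the basic moves: redacting identifiable strings such as emails and phone numbers, and replacing user IDs with opaque, non-reversible tokens before data is reused.

```python
import hashlib
import re


def deidentify(text: str) -> str:
    """Toy de-identification: strip direct identifiers from free text.

    Real pipelines are far more involved (names, addresses, voices);
    this only illustrates the idea of removing identifiable information.
    """
    # Redact anything that looks like an email address.
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)
    # Redact US-style phone numbers.
    text = re.sub(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b", "[PHONE]", text)
    return text


def pseudonymize(user_id: str, salt: str = "example-salt") -> str:
    """Replace a user ID with a stable but non-reversible token.

    A salted hash means the same user always maps to the same token,
    but the token cannot be traced back to the user.
    """
    return hashlib.sha256((salt + user_id).encode()).hexdigest()[:12]


print(deidentify("Reach me at jane@example.com or 555-123-4567"))
# Reach me at [EMAIL] or [PHONE]
```

The point of the sketch is that de-identification removes the obvious identifiers but keeps the content usable for training, which is exactly why it reassures some users and unsettles others.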
Whether any of their measures give you the level of security you need to confidently use the tool will vary from person to person. Nevertheless, it is important that you review the policies carefully before making any sign-up decisions.
External security measures
External security refers to protections put in place to prevent outsiders (e.g., hackers) from accessing user data.
Otter AI provides details on their cybersecurity protections, including how they protect their employees’ computers and the data stored on Amazon’s secure AWS S3 servers. They also share that they have earned SOC 2 certification, one of the most well-known and respected standards for data security, confidentiality, and privacy.
For a layperson, it is a challenge to know what to look for in terms of data security certifications. Even security experts sometimes need a manual.
But that’s ok.
The goal here, for a user, should be to learn whether the company is paying enough attention to security, and paying enough to make you feel sufficiently secure.
Even if you don’t know what each certification means, verify that the company is being transparent and providing enough detail to assuage your concerns. And feel free to reach out to their sales team if you want more answers.
Happy AI tooling, friends!
Please share this article with anyone who should know more about this as they utilize AI tools for work or play.