
A.I. Implementation: The 70/30 Rule For Success

We will cut to the chase: some A.I. platforms are great, and some stink. 

Some are great at a specific task, while others are terrible at that same task.

We hear about artificial intelligence in the news like it is one “thing,” but it’s critical to understand that there are dozens of different A.I. platforms, each with its own strengths and weaknesses. For example, an A.I. can be terrible at writing blog posts but excel at identifying concentration risk within an investment portfolio. Because of this, it is essential to try different A.I.s and test their output against one another for the specific task you want help with.

These platforms differ from one another because they run on different code and, critically, they have been trained on different source materials. If the A.I. you are using is trained on terrible writing, you can almost guarantee the output will be terrible writing — because bad writing is all the A.I. “knows.”

The 70/30 rule for success

Content creation is one of the biggest areas where we are testing A.I. It falls under the umbrella of “generative A.I.,” where we ask an A.I. to write a blog post, a social media post, or the script for a video.

We like to apply the 70/30 rule: we consider whether the A.I. can complete 70% of the content so that a human editor can do the remaining 30% to bring it over the goal line. If you had someone on your team who could reliably write 70% of a blog post from a good prompt, you would keep them around. The same logic should be applied when evaluating A.I. Many A.I. platforms today can meet this standard, and they will only improve with time.

However, if you expect any A.I. to create a finished product in one go, you will be underwhelmed by the result. We have seen RIAs abandon A.I.-powered projects because of exactly this kind of misaligned expectation.

In any writing project, getting from a blank piece of paper to a solid first draft can be the most time-consuming part. This is the part of the workflow where we are using A.I. the most today: getting a good draft in place that subject matter experts can edit, refine, build out, and fact-check.

Seeking prompt engineers

A.I. is just a tool. There are great carpenters and there are bad carpenters, yet all carpenters use the same basic tools; the difference lies in how they use them. The same is true of A.I., which highlights one thing often overlooked: the quality of the prompt being provided to the A.I.

The phrase “garbage in, garbage out” is used a lot in data analysis: if you feed a model bad data, the output and analysis will also be bad. The same holds for A.I. prompts, and some universities are already offering courses in “prompt engineering” to teach people how to “feed” A.I. platforms good prompts that set them up for success.

RIAs do not need prompt engineers to implement A.I. They do, however, need people who are interested in testing different approaches and evaluating the output over time. The firms finding the most success with A.I. today find it fun to push the limits of what an A.I. can do and to discover where it breaks down.

We encourage any firm thinking about A.I. to bring this mindset to the table, as it leads to the greatest efficiency gains. If you have any questions about this blog post or about how your RIA can integrate A.I. into its workflows, we invite you to connect with our team.
