From “Secret Sauce” to Market Leadership
Here is the third installment in my series about maximizing “secret sauce” in tech marketing.
I thought I’d wrap the series with a few final thoughts, as well as one more case study.
To Brand or Not to Brand
If your secret sauce is so special, and can really make your company a market beater, why not give it a name?
You need to call it something. In the second post, I explained how to describe your IP in simple, clear terms (see the case study below for another example). But you will not always have the chance to use a long-form description. Some companies use brands, i.e., trademarkable names; others use more general terms.
Google's PageRank is an example of the former. As for generic phrases, consider Amazon's one-click ordering and Identiq's provider-less trust network.
There are tradeoffs regarding branded names vs. generic labels. Startups need to weigh the overhead of establishing and supporting multiple brands vs. the potential benefits of a catchy term that they can own.
And don't think a great name alone is enough (see my Certs and Retsyn example in the first post; giving a fancy name to a common ingredient worked for breath mints back in the day, but likely wouldn't for tech companies today).
Making it a Verb, Playing Bigger
The ultimate achievement is when your product or technology becomes so widely adopted that it is synonymous with that function or kind of product, sometimes even becoming a verb. Think Google or Xerox. This NY Times article explains how a brand name becomes generic.
In my first installment, I mentioned companies that parlayed core technologies into market leadership, like Cisco with routing tech, Google with PageRank, and others. How did they do this?
Sure, it starts with great technology, of the proverbial disruptive variety. But you also need steadfast, focused execution and marketing to become a category king. I blogged about this approach, which was inspired by the book Play Bigger.
Mipsology (Deep learning inference acceleration)
Mipsology’s core tech, built into its Zebra software, makes it possible to run neural network deep learning models developed for popular Nvidia GPUs on other kinds of chips.
This means models that have been “trained” (fed large data sets to teach computers how to make decisions in “the wild” from new data) on Nvidia can run flawlessly on FPGA (field-programmable gate array) chips, which offer certain advantages in terms of flexibility, lifespan, and tolerance of environmental factors.
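For readers who want an intuition for how such a “no reprogramming needed” layer can work, here is a purely conceptual sketch. Every class and method name below is hypothetical and invented for illustration; none of it comes from Mipsology's actual Zebra software or API. The idea is simply that if two hardware backends expose the same interface, the model code never has to change:

```python
# Conceptual sketch only: a "transparent" backend swap for inference.
# All names here are hypothetical, not Mipsology's or Nvidia's real APIs.

class GpuBackend:
    """Stand-in for the GPU runtime a model was originally built against."""
    def infer(self, inputs):
        return [x * 2 for x in inputs]  # placeholder for real inference math

class FpgaBackend:
    """Stand-in for an FPGA runtime exposing the *same* interface."""
    def infer(self, inputs):
        return [x * 2 for x in inputs]  # same results, different hardware

class Model:
    """The model code is unchanged; only the backend behind it differs."""
    def __init__(self, backend):
        self.backend = backend

    def predict(self, inputs):
        return self.backend.infer(inputs)

# Because both backends honor one interface, swapping hardware requires
# no change to the model or the application code:
model_on_gpu = Model(GpuBackend())
model_on_fpga = Model(FpgaBackend())
assert model_on_gpu.predict([1, 2, 3]) == model_on_fpga.predict([1, 2, 3])
```

This is the same general pattern behind many hardware-abstraction layers: keep the interface stable and hide the hardware-specific work beneath it.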
Most who work in and write about AI understand the importance of inference acceleration. That’s because the NN models are ineffective if they’re too slow making decisions in real-world situations (just consider the need for speed when it comes to vision intelligence for real-time guidance in robotics, telesurgery and autonomous vehicles).
However, fewer understand the nuances of running neural network models on GPUs vs. FPGAs. And it seemed a tall order to explain how Mipsology can take a model developed for Nvidia and accelerate inference on a Xilinx (or other) FPGA without the need for any additional programming. Plus, the Mipsology executive team was concerned that the company would tip its hand to the competition by trying to explain this.
“The best way to see [Mipsology] is that we do the software that goes on top of FPGAs to make them transparent in the same way that Nvidia did Cuda CuDNN to make the GPU completely transparent for AI users,” said Mipsology CEO Ludovic Larzul, in an interview with EE Times.