Q&A for the Boston May 2011 Text Analytics Summit

I’ll be co-presenting “Case Study: Analyze The Mind Using Text” at the 18-19 May 2011 Text Analytics Summit in Boston, MA, and was asked to answer the following questions in preparation. Sharezees!

7th Annual Text Analytics Summit in Boston, 18-19 May 2011

The questions are in bold italics, my responses bland and neutral, as always…

How can text analytics help you better understand your customers?

There are two main areas: incoming and outgoing messages.

Incoming customer messages – understand the thoughts behind the words. It’s not enough to understand the words themselves; you need to understand the energy, the emotion, … I’m tempted to use the now-exhausted term “sentiment”, but that word has been so bastardized and misused as to no longer have any real meaning.

If we use ‘sentiment’ as psychologists, anthropologists and psycholinguists use the term, then we need to understand the thought patterns demonstrated by the incoming messages, the behavioral patterns those thought patterns manifest (the sending of the message itself and anything else that might occur), and the motivations behind both.

Now comes the trick that is so often lost in today’s business world: if we believe we know our customers better than they know themselves (via any tool or technology), then we tend to grow contemptuous of them. We believe we understand their motivations, their desires, their hopes and dreams, and that’s both foolish and expensive.

So the trick is to understand what’s truly being communicated in our outgoing messages to our customers. Part of this is learned by observing responses in the market. There is an old adage in semantics and semiotics: the meaning of the message is the response it elicits. Early on we were demonstrating that a great deal of creative and designer output overtly conveys the product’s or service’s value proposition while covertly communicating the thoughts of the creative and design groups themselves, sometimes to the detriment of the company. We’ve demonstrated this in education and in businesses small and large, and while the recent interest in “neuro” has brought it forward in people’s thinking, there’s still not a lot people are doing about it.

So we need to understand what’s really going on inside people’s hearts and minds — that’s incoming messaging — and we really need to be sure of what we’re putting into their hearts and minds — that’s outgoing messaging.

How important is it to understand consumer sentiment?

There’s that word again, “sentiment”. Tell me what you mean by it and I’ll tell you whether it’s important. We have custom-designed “sentiment” tools, crafted according to client specifications, and we know our tools are being rebranded and sold for 20-100x what our clients pay (which is fine; we don’t have to manage end-customers).

All of our tools started out by asking clients, “Forget what the industry is saying ‘sentiment’ is: what do you want to know? If you could create your own ‘sentiment analysis’ tool, what would it report?” We’ve documented that conversation and the resulting tool in several places. The result is that our clients have the ability to know who’s an influencer, how far their information will travel, how long it’ll be out there, who it’ll influence and in what direction, which direction audiences will go and when, how something will be accepted and where, what modifications are necessary for a product/service to fly or flop, …

So if those things are important, then understanding consumer sentiment is important to you.

What can text analytics do for the social media world?

See the above.

What industry (or industries) is using text analytics the most, and how do you see that changing over the next 5 years?

National intelligence agencies and their civilian counterparts, marketing intelligence agencies, are using it the most. Use is increasing as more and more businesses come to terms with “neuromarketing without the wires”, some of which involves this kind of analysis.

Q&A for the Technology Driven Research Event in Chicago, 2-3 May 2011

Hello again. Sorry not to have posted here in a bit. We’ve been a little busy.

In any case and as often happens, I was interviewed for the upcoming Technology Driven Research Event in Chicago, 2-3 May 2011. Here’s a transcript for your reading pleasure. The questions are in italics, my responses in plain text.


Q: NextStage Evolution offers technology that understands human thought through any machine interface; that seems to be almost a Rosetta Stone for market research! Can you tell me a bit more about your approach and how it works?

A: The answer depends on what you mean by “works”. One version of it “works” by putting a little JavaScript tag on a client’s site (in the case of our visitor analytics tools). A completely different and equally truthful answer is that it “works” by having a very sophisticated understanding of how people behave when they’re being themselves, quite similar to how human beings non-consciously understand each other through years of interacting with one another.

For example, you walk through a mall, glance at someone and “intuitively” know their gender and age, and can make some amazingly accurate guesses about their background, lifestyle, education, income, likes and dislikes, and so on. You do this, and your “guesses” have an accuracy that would make IBM’s Watson look like a low-grade moron, because Watson knows facts and can connect them but it doesn’t have experience, specifically human-to-human experience.

My research into such things started back in 1987. I was listening to some educational psychologists talking about a problem in that field. It triggered something in me: basically, that there was a way to model how humans learn about each other, a way for a computer to go through the different stages of social learning that humans go through from birth throughout the rest of their lives. This model eventually became a set of rules similar to the sets of rules humans use when they interact with each other. When two people meet, an incredible number of factors go into deciding the level of intimacy they’ll share. The decision to work together, play together, live together, etc., can be thought of as a “sum of the parts”. Different levels of intimacy are determined by the number of parts in the sum, whether the result is positive or negative, how positive, how negative, and so on. Humans recognize one individual from another by summing all the available parts and matching that sum to the sum of the person they have in memory. Are the sums relatively equal? Then you know this person. Not so equal? Then either you don’t know this person or this person has changed, and if so, do you still want to know them? This storage of sums became our first breakthrough, the identity-relational model. It mimics how people know each other and it scales.
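To make the “sum of the parts” idea concrete, here’s a minimal sketch in Python. The feature names, weights, and tolerance are hypothetical illustrations of the concept only, not NextStage’s actual identity-relational model, which is far richer than a single scalar comparison.

```python
# Toy illustration of the "sum of the parts" identity idea described above.
# All feature names, weights, and the tolerance are hypothetical; the real
# identity-relational model is far richer than a single scalar sum.

from typing import Dict

def identity_sum(parts: Dict[str, float]) -> float:
    """Collapse the observed 'parts' of an interaction into one signed sum."""
    return sum(parts.values())

def seems_familiar(observed: Dict[str, float],
                   remembered: Dict[str, float],
                   tolerance: float = 0.5) -> bool:
    """Compare the sum of what we observe now to the sum we hold in memory.
    Roughly equal sums -> 'I know this person'; otherwise this is a stranger
    or someone who has changed."""
    return abs(identity_sum(observed) - identity_sum(remembered)) <= tolerance

# Hypothetical parts: positive values pull toward intimacy, negative push away.
remembered_colleague = {"shared_humor": 1.0, "eye_contact": 0.5, "keeps_promises": 1.5}
person_at_the_door = {"shared_humor": 0.9, "eye_contact": 0.6, "keeps_promises": 1.4}

print(seems_familiar(person_at_the_door, remembered_colleague))  # True
```

The point is only the shape of the comparison: sum what you observe now, sum what you remember, and treat near-equal sums as recognition.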

So you could say I was teaching the computer facts, but instead of facts like “Barack Obama is the 44th President of the USA” — essentially an equation, A = B — I was teaching the computer social facts, what makes up human social intuition, things like “Sometimes when a person looks down and sighs heavily it means they’re sad, sometimes it means they’re tired, sometimes it means …”, and all these “sometimes it means” can be thought of as sums of the parts.

I remember telling those edpsych people that they’d never solve the problem from within their own discipline (I love Einstein’s “We can’t solve problems by using the same kind of thinking we used when we created them.”), and, true to my word, to make our technology “work” I borrowed from disciplines so far removed from the traditional paths that, when I created the first working model of our technology, a friend counted elements from 120 “unrelated” fields.

We created a new data architecture, the identity-relational model, and some new mathematics to work it, and so far have two patents on how our approach “works”. For any of your readers familiar with Feynman Diagrams: we made Feynman Diagrams of human interaction, of human emotion and behavior, of social systems and social dynamics.

The end result is that our technology can read a document, watch a video, or listen to a podcast and determine the traditional demographics (age, gender, etc.) of the best audience for that material. What’s amusing is that there’s usually a lot of difference between the audience marketers are targeting and the audience their creative is actually targeting.

Further, our technology can determine author intent, as in “what did the author really hope to achieve with this material?” Most companies are amazed at how many non-conscious messages marketers and creative plop into their content, or how strongly those non-conscious messages affect audience response.

On a more technical level, our technology can report on both author and audience RichPersona, a fairly complete description of their cognitive, behavioral and motivational psychologies. This is useful for marketers because it reports how the audience will respond to some creative, when they’ll respond (intender status), why, what exactly will cause the response, how to shape the response to the client’s needs, and, demographically, who. We’re currently beta-testing a “SampleMatch” tool that uses these aspects to help companies create test communities for their products and services.

Another part of our technology can observe website visitors and determine demographic and psychological factors without cookies, without forms, and without interrogating the visitor or other internet databases in any way, with no equipment other than an active browser session (no cameras, no harnesses, no scanners, no …), with the visitor interacting in the most normal settings (sitting at home, on the bus, in the mall, …) and doing what they want, and our technology does this in real time. An independent test determined that our technology was 98% accurate in determining visitor age and 99% accurate in determining gender simply by observing how visitors navigate a web site.
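For readers who want a feel for what “observing how visitors navigate” might look like in code, here’s a deliberately naive sketch. It is not NextStage’s algorithm; the features, thresholds, and the invented “engagement style” label are placeholders standing in for the general pattern of turning raw navigation behavior, and nothing else, into an inference about the visitor.

```python
# A naive sketch of inferring something about a visitor from navigation
# behavior alone (no cookies, no forms). Every feature, threshold, and label
# here is invented for illustration; it is not NextStage's method.

from dataclasses import dataclass
from typing import Dict, List

@dataclass
class PageView:
    url: str
    dwell_seconds: float   # time spent on the page
    scroll_depth: float    # fraction of the page scrolled, 0.0 - 1.0
    clicks: int            # clicks during the view

def behavior_features(session: List[PageView]) -> Dict[str, float]:
    """Summarize a browsing session into a few behavioral signals."""
    n = max(len(session), 1)
    return {
        "avg_dwell": sum(p.dwell_seconds for p in session) / n,
        "avg_scroll": sum(p.scroll_depth for p in session) / n,
        "click_rate": sum(p.clicks for p in session) / n,
    }

def guess_engagement_style(session: List[PageView]) -> str:
    """Toy rule: deliberate readers vs. rapid scanners, from behavior only."""
    f = behavior_features(session)
    if f["avg_dwell"] > 45 and f["avg_scroll"] > 0.7:
        return "deliberate reader"
    if f["click_rate"] > 3:
        return "rapid scanner"
    return "casual browser"

session = [PageView("/pricing", 60.0, 0.9, 1), PageView("/docs", 50.0, 0.8, 2)]
print(guess_engagement_style(session))  # "deliberate reader"
```

A real system would use far richer behavioral features and a learned model rather than hand-written thresholds; the sketch only shows where that kind of inference gets its raw material.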

And that brings us back to someone sitting in a mall and making highly accurate guesses about people they see just by watching them. What NextStage does is recognize that visitors are “walking” through a website and our technology is the person sitting in the mall, watching others walk past and making highly accurate guesses.

This technology has been in use since 2001.

Q: What do you think are the major drivers of change in the market research space right now and how is NextStage Evolution planning to take advantage of those trends?

A: Major drivers of change… One is definitely the market itself. When audiences demand change, suppliers must change in order to keep and increase their audience. An interesting example of this “audience-demand/market-change” cycle is what’s happening in the Middle East (as I write this). The suppliers are the various governments, the audience is each country’s population and the market is each country’s economy. The audiences are demanding change and the suppliers — the governments — must change in order to serve those changing audiences. In a more traditional marketspace, if suppliers don’t change then the audience finds another supplier. Extreme cases are when a market fails and a new market takes its place. Some countries have been fairly successful at changing their marketspace, and many of the former Soviet economies are examples of this.

Another driver is the increasing accountability required of analytics. I wrote a three-part blog series (it starts with The Unfulfilled Promise of Online Analytics, Part 1) based on a long study of people’s attitudes towards online analytics, and one of the outstanding elements there dealt with “accountability”, specifically that no one really wants to be held accountable (surprise!). I wrote Why Isn’t Marketing a Science, Part II about how marketing is being forced into an accountable model and how it’s kicking and screaming all the way.

But I do think marketing is going to have to become accountable, because executives are demanding more and more from their marketing dollars as audiences — thanks to the ’net itself — have become increasingly vocal and demonstrative. As I noted above, the audience is changing, therefore the market will change.

In a way, “marketing” suppliers are always changing. Every time somebody comes out with a “new” way of calculating something, they’re offering a change in the market. I love 140Sweets co-founder Anna O’Brien’s statement that “Random metric names and symbols is not an equation” because it demonstrates a need for accountability in the analytics marketspace.

The latest change attempt is “neuromarketing”, and as always the unspoken claim is “now we’re accountable”. Accountable? Great! But now the consumer has to ask the next set of questions: Accountable regarding what? Accountable to whom? With what kind of repeatable accuracy over time?

Shoving someone into a physically restrictive environment such as an EEG or fMRI rig, or sitting them in a chair with their head locked into an eye-tracking mechanism, etc., definitely provides data. But does anybody honestly want to state that such a methodology is demonstrative of the consumer’s real-world experience? It’s the difference between “Someday I’d like to learn how to dance” and taking dancing lessons. The latter teaches you what actually has to be done; the former demonstrates how well your brain can mimic (“imagine” or “remember” might be better terms) a concept it has called “dancing”.

The difference is that such methods provide data about (what I consider) extremely synthetic situations. Nobody engaging in commerce — e-commerce, intellectual, social, etc. — does it strapped into some kind of synthetic environment, so investigators using such setups have to be willing to accept synthetic results.

This brings us to how NextStage is poised to take advantage of those trends. I suppose the first is some 20 years of research into these things. By “20 years of research” I mean 20 years of studying how people interact with information presented via machine interfaces; for about the last 15 of those years we’ve been studying how people interact with the web, and for about the last 6-7 how people interact with mobiles. So the first thing is that we have direct experience with how people change their habits as their tools (desktop to laptop to netbook to mobile, web to 2.0 to 3.0 to x.0, Genie to AOL to email to Facebook to Twitter to …) change; we’re not talking about taking data from completely different models (Network TV or Print, for example) and saying “This is what happened here so it’s what’s going to happen there”.

So when it comes to accountability, between the patents, the scientific conference presentations, the peer-reviewed publications, the kudos we’ve garnered since we started, the ongoing research, …, NextStage is pretty well covered.

NextStage also has a fairly decent lock on adapting to market and audience change because our technology is a basic (I’ve also heard the term “platform”) technology. One of our first investors said, “You’ve created plastic. It doesn’t matter if someone wants a baby bottle or a car dashboard because your technology can be shaped to whatever people require.” This is demonstrated by the fact that the majority of our tools came from client requests. We’d be in meetings, someone would say “It would be great if we could figure out…”, one of us would think about it, and a few days later a prototype tool would be ready for testing. An example of this “if only we could figure out” attitude is documented in Sentiment Analysis, Anyone? (Part 1). We said, “Forget about what ‘sentiment analysis’ tools do, tell us what you want done”; we created the tool along those lines and it’s been one of our best sellers ever since.

Another way we’re taking advantage of market changes is the price point of our tools. Right now, most senior-level execs don’t use our tools because most aren’t willing to risk their jobs on a (relatively) low-price-point tool. It’s like going to the bank for a loan and not being able to make payments. You borrow US$20K and can’t make the payments, it’s your problem; you borrow US$20M and can’t make the payments, it’s the bank’s problem. The same rules apply here. A US$100K solution goes wrong and it’s the vendor’s problem; a US$499 solution goes wrong and it’s the exec’s problem. This “who owns the problem” challenge is compounded by our established accuracy. What do you do if you go with a low-cost solution that’s documented at 90%+ accuracy and it doesn’t work? You look for a new job.

Where all of this works for us is that we’re the darling of mid-level management. They have discretionary spending that’s right in line with what our tools cost, and they don’t carry the responsibilities of their senior managers. They can expense US$10-499, get a result, report it, and be done. There are no budgetary delays, procurement meetings, tactical planning, resource allocation, etc., and it’s up to senior management to act. This is a win-win for us, especially since people who use us take us up the ladder when they move on to a new position.

So there you go and I hope it’s useful. Please let me know if you need more or other.

Joseph