The prejudice of questionnaires

I make no secret of the fact that I loathe filling in questionnaires for feedback or research, even though I usually take up the offer to do so.

Typically they are poorly designed and riddled with the questioner's own perceptions and slant: they have a particular set of things they want to measure, and the questionnaire asks about those things in a way that guarantees those end measures. Thus the questions become a self-fulfilling prophecy as far as the meaning in the data is concerned.

Here's today's example, which irked me sufficiently that I abandoned the questionnaire partway through. It's from a retail website where I was looking to buy an iPad 2 in time for Christmas. A few things about the experience were not great, though this organisation has some helpful people on Twitter who gave me good suggestions, so I was keen to give balanced feedback.

But here's the question:

Again thinking about the main thing you were looking to buy, which one of these would you say was the most important in deciding which product you wanted to buy?

[Answer options omitted: a single-choice list covering look and feel, technical specification and functionality.]

Perhaps for most products this question makes sense; purchase choices are made predominantly on the basis of one or two of the above characteristics. However, not so with the iPad, or pretty much any Apple product for that matter. The unique selling point of Apple, its very "value proposition" if you like, is that it beautifully combines all three of the above elements. I am shopping for an iPad because it scores marvellously on look and feel, technical specs and functionality in a way that most (all?) of its competitors do not.

As such I can't answer the question meaningfully; I'd be telling the researcher something they are expecting to hear, rather than something they haven't allowed for and that I actually want to say.

They could have chosen a different format for this question, "Which aspects most influenced your decision?", with multiple selections allowed. They would still get a distribution of answers from which the most significant factor could be drawn out, as the toy tally sketched below illustrates. But by forcing a single answer they are skewing the results and baking the researcher's preconceptions about what data needs to be reported into the questions themselves.
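
To make that point concrete, here is a minimal sketch of tallying multi-select answers. The respondents, aspects and numbers are invented for illustration, not taken from the retailer's survey:

```python
from collections import Counter

# Hypothetical multi-select responses: each respondent may tick any number of aspects.
responses = [
    {"look and feel", "technical spec", "functionality"},   # e.g. an iPad buyer
    {"technical spec"},
    {"look and feel", "functionality"},
    {"functionality"},
    {"look and feel", "technical spec", "functionality"},
]

# Tally every ticked aspect. The most significant factor still stands out,
# but respondents who genuinely value several aspects are no longer forced to drop them.
tally = Counter(aspect for r in responses for aspect in r)
for aspect, count in tally.most_common():
    print(f"{aspect}: {count} of {len(responses)} respondents")
```

The forced single-choice version discards exactly the overlap that, for a product like the iPad, is the answer.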

This is bad design and leads to misinformed statistics.

 

 

Nuance Study Finds Automated, Live Agent Preferences

Nuance Communications has announced the findings of a commissioned study conducted by Forrester Consulting on behalf of Nuance titled, “Driving Consumer Engagement with Automated Telephone Customer Service.”  

It found that consumers rate automated telephone customer service higher than live agents for certain straightforward interactions: 'In five out of ten posed scenarios, consumers preferred automated telephone customer service systems over live agent interactions for tasks like prescription refills, checking the status of a flight from a cell phone, checking account balances, store information requests and tracking shipments.'

Consumers’ satisfaction with customer service leaves a lot of room for improvement, too, the study found: 'Only 49 percent of U.S. online adults report being satisfied, very satisfied or extremely satisfied with companies’ customer service in general.' 

And we're just used to it by now: Automated telephone systems are 'an expected and accepted customer service channel,' the survey found, with 82 percent of US online adults having used an automated Touchtone or speech recognition system to contact customer service in the past 12 months.  

Humans 'hear' through their skin

Sensations on the skin play a part in how people hear speech, say Canadian researchers.

A study found that inaudible puffs of air delivered alongside certain sounds influenced what participants thought they were listening to.

Writing in the journal Nature, the team said the findings showed that audio and visual cues were not the only important factors in how people hear.

The findings may lead to better aids for the hard of hearing, experts said.

It is already well known that visual cues from a speaker's face can enhance or interfere with how a person hears what is being said.

In the latest study, researchers at the University of British Columbia in Vancouver wanted to look at whether tactile sensations also affected how sounds are heard.

They compared sounds which, when spoken, are accompanied by a small inaudible breath of air, such as "pa" and "ta", with sounds which are not, such as "ba" and "da".

At the same time, participants were given - or not - a small puff of air to the back of the hand or the neck.

They found that "ba" and "da", known as unaspirated sounds, were heard as the aspirated equivalents, "pa" and "ta", when presented alongside the puff of air.

[source: BBC - see references]

40% of callers avoid speech systems wherever possible

 

Many consumers avoid using speech automated systems when calling customer call centres and prefer to use the Internet as their first port of call. In fact, one-third of consumers surveyed struggle to see any benefits to using an automated contact centre service, representing a rise on last year’s figures.

Most consumers also believe companies only use automated services in their contact centres to save money. Furthermore, two in five people claim they are unhappy with the automated systems’ ability to deal with queries.

These are some of the highlights of the 2009 Alignment Index for Speech Self-Service report released by Dimension Data in conjunction with Cisco and Microsoft subsidiary Tellme Networks Inc.

The report, which compares and measures consumer, vendor and enterprise perceptions of speech systems, reveals that of 2,000 consumers polled across six countries* some 40% - up from 36% in 2008 - said they avoid using speech systems “whenever possible”, while 50% said they use the Internet as their first choice for interacting with a business or organisation.

And with only 25% of consumers saying they would be happy to use speech solutions again, organisations are not winning their hearts and minds.

 

When using automated systems, over a third of the consumers polled are most frustrated when a human agent asks them to repeat information they have already provided to the automated system. And 19% of consumers say they are most annoyed when the system doesn't recognise what they've said.

On the other hand, companies that have deployed speech recognition are fairly optimistic about the long-term viability of such systems for customer service. They believe the path to improving customer satisfaction with speech recognition lies in making it easier for consumers to use the systems.

Looking at consumer behaviours, the report statistics indicate that attitudes toward customer service among the younger age groups are changing. Over half of consumers between the ages of 16 and 34 use an online channel for their customer service needs, and this will continue to place more pressure on companies to design customer service solutions that provide choice, accuracy and speed.

 

 

Doing what the brain does: how computers learn to listen

 

Researchers at the Leipzig Max Planck Institute for Human Cognitive and Brain Sciences and the Wellcome Trust Centre for Neuroimaging in London have now developed a mathematical model which could significantly improve the automatic recognition and processing of spoken language. In the future, algorithms of this kind, which imitate brain mechanisms, could help machines to perceive the world around them.

Many people will have personal experience of how difficult it is for computers to deal with spoken language. For example, people who "communicate" with the automated telephone systems now commonly used by many organisations need a great deal of patience. If you speak just a little too quickly or slowly, if your pronunciation isn't clear, or if there is background noise, the system often fails to work properly. The reason is that the computer programs used until now rely on processes that are particularly sensitive to perturbations. When computers process language, they primarily attempt to identify characteristic features in the frequencies of the voice in order to recognise words.

"It is likely that the brain uses a different process", says Stefan Kiebel from the Leipzig Max Planck Institute for Human Cognitive and Brain Sciences. The researcher presumes that the analysis of temporal sequences plays an important role in this. "Many perceptual stimuli in our environment could be described as temporal sequences." Music and spoken language, for example, are comprised of sequences of different length which are hierarchically ordered.

According to the scientist's hypothesis, the brain classifies the various signals from the smallest, fast-changing components (e.g., single sound units like "e" or "u") up to big, slow-changing elements (e.g., the topic). The significance of the information at various temporal levels is probably much greater than previously thought for the processing of perceptual stimuli. "The brain permanently searches for temporal structure in the environment in order to deduce what will happen next", the scientist explains. In this way, the brain can, for example, often predict the next sound units based on the slow-changing information. Thus, if the topic of conversation is the hot summer, "su?" will more likely be the beginning of the word "sun" than the word "supper".
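
The press release gives only the intuition, not the model itself. As a toy sketch of that intuition (a slow-changing topic biasing predictions about the fast-changing sound units that follow), the snippet below uses invented topics, words and probabilities:

```python
# Toy illustration only (not the Max Planck model): a slow-changing "topic"
# variable biases the prediction of the fast-changing units that complete "su...".
# The vocabularies and probabilities are made up for the example.
topic_priors = {
    "hot summer":   {"sun": 0.8, "supper": 0.2},
    "evening meal": {"sun": 0.1, "supper": 0.9},
}

def predict_completion(topic: str, fragment: str) -> str:
    """Return the most probable completion of `fragment` given the current topic."""
    candidates = {w: p for w, p in topic_priors[topic].items() if w.startswith(fragment)}
    return max(candidates, key=candidates.get)

print(predict_completion("hot summer", "su"))    # -> "sun"
print(predict_completion("evening meal", "su"))  # -> "supper"
```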

 

 

Big brother untangles baby babble

In 2005, Deb Roy, an artificial intelligence researcher at the Massachusetts Institute of Technology (MIT) Media Lab, set out to understand how children learn to talk.

"We wanted to understand how minds work and how they develop and how the interplay of innate and environmental influence makes us who we are and how we learn to communicate."

It was a big task, and after years of research scientists around the world had only begun to scratch the surface of it.

But now, Professor Roy is beginning to get some answers, thanks to an unconventional approach, an accommodating family and a house wired with technology.

And the research may even have spin-offs for everything from robotics to video analysis.

 

T3i Group Predicts Healthy Growth In IVR Market Driven By the Synergy of New Applications and Technology

According to T3i Group's latest research, the global interactive voice response (IVR) market, which includes speech recognition, will grow to $514 million by 2013, up from an estimated $431 million this year, due in part to the growth in VoiceXML (VXML) technology.

The new "InfoTrack for Converged Applications 2008 IVR Market Report" found global IVR shipments from the top 11 vendors exceeded 625,000 ports in 2008. The top three vendors based on ports shipped were Nortel, Genesys and Convergys; the revenue leaders were Avaya, Nortel and Genesys. T3i Group said North America led all regions, though with considerably less than 50% of the market, followed by Europe, Middle East and Africa (EMEA) and then Asia Pacific (APAC).

T3i Group segmented the analysis in this report by technology, applications and vertical industry.

Among the key findings:

  • 95% of IVR ports shipped in 2013 will support VXML, compared with less than 75% today. VXML lets the same Web-based applications, such as order entry, be offered over the phone with speech recognition (a minimal sketch of such a dialogue follows this list).
  • The top three IVR applications are incoming call handling for contact centers; inbound self-service transactions; and outbound calling, such as appointment confirmations, collections reminders and repair notifications.
  • As vendors and enterprises integrate IVR into more comprehensive customer-care solutions, IVR ports shipped specifically for inbound calls to contact centers will decrease nearly 10% each year to 2013.
  • In comparison, IVR port growth will be driven by outbound applications at a rate of almost 12% annually through 2013.
  • DTMF (touch-tone) port shipments are declining, while shipments of speech ports, which recognize speech or convert text to speech, will hold an almost 2:1 advantage over DTMF by 2013.
  • IP/SIP port shipments are growing strongly year over year; by 2013, only 10% of all IVR ports shipped will be TDM, compared with 42% today.
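
To make the VXML bullet concrete, here is a minimal, hypothetical sketch of the kind of VoiceXML form a speech-enabled port might execute for a simple self-service task. The URL and field name are invented, and the markup is simply held in a Python string for illustration:

```python
# Hypothetical VoiceXML for a simple order-status self-service dialogue.
# The URL and field name are invented; the markup is kept in a Python string
# purely so it can be printed or served from an application server.
VXML_ORDER_STATUS = """<?xml version="1.0" encoding="UTF-8"?>
<vxml version="2.1" xmlns="http://www.w3.org/2001/vxml">
  <form id="order_status">
    <field name="order_number" type="digits">
      <prompt>Please say or key in your order number.</prompt>
      <filled>
        <!-- Hand the recognised digits to the same back end a web form would use -->
        <submit next="https://example.com/ivr/order-status" namelist="order_number"/>
      </filled>
    </field>
  </form>
</vxml>"""

print(VXML_ORDER_STATUS)
```

The point of the report's VXML figure is that the same back-end application behind a website's order-status page can sit behind the submit target of a dialogue like this one.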

 

Avaya Receives Highest Rating in Report on IVR Systems and Enterprise Voice Portals by Gartner

Avaya Inc. today announced the company has received the highest rating -- a "Strong Positive" -- in Gartner's recent "MarketScope for Interactive Voice Response (IVR) Systems and Enterprise Voice Portals, 2009"(1). The report evaluated leading vendors' voice response systems and applications, and rated vendors according to a number of criteria.

In the report, Gartner classified systems into two distinct platforms -- Interactive Voice Response (IVR) and Voice Portal -- and rated vendors based on evaluation criteria such as Market Understanding, Marketing Strategy, Sales Strategy and Overall Visibility. A Gartner MarketScope report provides specific guidance for users who are deploying or have deployed products or services. The report's evaluation is based on a weighted evaluation of a vendor's products in comparison with the evaluation criteria.

Study: TV can impair speech development of young children

A new study adds to the debate over whether television impairs children's language development. It found that parents and children virtually stop talking to each other when the TV is on, even if they're in the same room.

For every hour in front of the TV, parents spoke 770 fewer words to children, according to a study of 329 children, ages 2 months to 4 years, in the June issue of Archives of Pediatrics & Adolescent Medicine. Adults usually speak about 941 words an hour.

Lead researcher Dimitri Christakis and his colleagues fitted children with digital devices that recorded everything they heard or said one day a month for an average of six months. A speech-recognition program, which could differentiate TV content from human voices, compared the number of words exchanged when televisions were on or off.