Age-related difficulty recognizing words characterised by brain differences

Older adults may have difficulty understanding speech because of age-related changes in brain tissue, according to new research in the May 13 issue of The Journal of Neuroscience. The study shows that older adults with the most difficulty understanding spoken words had less brain tissue in a region important for speech recognition. The findings may help explain why hearing aids do not benefit all people with age-related hearing difficulties.

Although some hearing loss can be a normal part of aging, many older adults complain about difficulty understanding speech, especially in challenging listening conditions like crowded restaurants. Research has suggested that this decline in speech recognition is independent of hearing loss.

Harris and her colleagues found that structural differences in the brain's auditory cortex predicted performance on the task, even when they controlled for hearing loss. The older adults who had the most difficulty recognizing words also had the least brain volume in a region of auditory cortex called Heschl's gyrus/superior temporal gyrus. However, the relationship between the ability to identify words and the volume of auditory cortex was also present in younger adults.

Hands-Free Car Kits Not Truly Hands-Free

The Strategy Analytics Wireless Device Lab service research, “Hands-Free Car Kits: Consumers Lack a Truly Hands-Free Experience,” shows that purchasers of aftermarket car kits in the UK would like to use speech recognition to make their car kit experience truly hands-free, but current speech recognition systems fall short of their expectations.


These findings are based on in-depth one-on-one research sessions with participants near London, England.

“Strategy Analytics research shows that consumers would like their car kits to provide easy and intuitive hands-free methods for dialing and answering their cell phones while driving,” commented Chris Schreiner, Senior User Experience Analyst at Strategy Analytics. “However, consumers struggle with speech recognition due to usability issues.”

Kevin Nolan, Vice President of the Strategy Analytics User Experience Practice, added, “Our research also shows that streaming music is a car kit feature that adds value for consumers, although consumers prefer to hear music via a direct hookup to their vehicle speakers rather than via an FM transmitter.”

Hosted Solutions Can Breathe New Life into Legacy Systems

So it’s Spring 2009, and your contact center development budget has already been cut due to expected revenue downturns. Yet your business partners keep knocking at your door for new applications, because they’re being asked to get creative with their revenue generation and retention efforts. And you’re being asked to pull costs out of your expense budget, which inevitably comes from IT or development headcounts. Maybe your contact center is contracting, too. All of these conflicting pressures and market changes are forcing companies to seek out new options. Contact centers and IT organizations can get more functionality with less investment by blending their current solutions with many combinations of hosted or SaaS solutions, from call routing to CRM to unified communications (UC).

(more in source article)

Australia: NAB selects VeCommerce

Telstra and VeCommerce (a Salmat company) announced that Telstra had sold a VeCommerce speech recognition solution to National Australia Bank (NAB).

The new solution is designed to improve NAB's telephone banking experience - ensuring customer enquiries are directed to the most appropriately skilled banker in the most efficient way.

Telstra led the sale and implementation of VeConnect as part of the bank's new customer service initiative. This initiative sees the launch of a single telephone number (136 NAB) to cover all of the bank's customer enquiries. Through the use of VeConnect, an advanced Natural Language Speech Recognition application from VeCommerce, callers to NAB will be routed to more than 150 destinations within the bank by simply stating their request.
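
As a rough, generic illustration of this kind of natural-language call routing (not VeCommerce's actual VeConnect implementation), the recognized text of a caller's request is classified against a table of destinations. The sketch below uses a hypothetical keyword-based classifier with made-up destination names:

```python
# Minimal, generic sketch of natural-language call routing: the recognized
# text of a caller's request is mapped to one of many destinations.
# Keywords and destination names here are hypothetical, not NAB's or
# VeCommerce's actual configuration.

DESTINATIONS = {
    "lost card": "card-services-queue",
    "balance": "telephone-banking-ivr",
    "home loan": "mortgage-specialists",
    "open account": "new-accounts-team",
}

def route_call(utterance: str, default: str = "general-enquiries") -> str:
    """Return the destination whose keywords all appear in the recognized text."""
    text = utterance.lower()
    for keywords, destination in DESTINATIONS.items():
        if all(word in text for word in keywords.split()):
            return destination
    return default  # unmatched requests fall back to a general queue

if __name__ == "__main__":
    print(route_call("I'd like to check my account balance"))  # telephone-banking-ivr
    print(route_call("I think I lost my card yesterday"))      # card-services-queue
    print(route_call("Can someone explain your fees?"))        # general-enquiries
```

Production call-steering systems typically replace the keyword table with a statistical natural-language classifier trained on real caller utterances, but the routing step itself keeps the same shape: recognized utterance in, destination out.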

Speech Recognition Software Hits Blackberry

SHAPE Services GmbH today announced the availability of a speech recognition feature in IM+, the mobile instant messenger. SHAPE has joined forces with Yap Inc., pioneer of the first fully automated voice-to-text platform, to add a first-of-its-kind, thumbs-free message dictation feature to IM+ for BlackBerry® smartphones.

IM+ with speech recognition will allow users to dictate their instant messages and send them as text to their contacts in Facebook®, AIM®/iChat, MSN®/Windows Live™ Messenger, Yahoo!®, ICQ®, Jabber®, Google Talk™ and MySpaceIM. Recent studies indicate that over 70% of consumers prefer using voice to interact with their mobile devices rather than typing. Speech enablement increases message creation speed and greatly improves the usability of instant messaging on mobile devices.
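
The flow described above — capture speech, convert it to text through a hosted voice-to-text service, and deliver the result over the chosen IM network — can be sketched roughly as follows. The function names and the canned transcription are hypothetical placeholders, not Yap's or SHAPE's actual APIs:

```python
# Rough sketch of a thumbs-free dictation flow: audio goes to a hosted
# voice-to-text service, and the resulting text is sent like any typed
# instant message. Both service calls are hypothetical placeholders,
# not the real Yap or IM+ APIs.

def transcribe(audio: bytes) -> str:
    """Stand-in for a hosted voice-to-text request (e.g. an HTTPS upload)."""
    return "running ten minutes late, start without me"

def send_instant_message(network: str, contact: str, text: str) -> None:
    """Stand-in for the IM client's existing send path (XMPP, OSCAR, etc.)."""
    print(f"[{network}] -> {contact}: {text}")

def dictate_message(audio: bytes, network: str, contact: str) -> None:
    """Convert recorded speech to text, then send it as a normal message."""
    send_instant_message(network, contact, transcribe(audio))

if __name__ == "__main__":
    dictate_message(b"<recorded audio frames>", "Google Talk", "alex@example.com")
```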

M*Modal's Advanced Speech Recognition Technology is Incorporated into Scribe's Web-based Document Solutions

M*Modal today announced that Scribe Healthcare Technology has incorporated the company's Speech Understanding technology into its web-based medical dictation, transcription and archival solutions.

Scribe's technology offerings simplify the business of medicine by providing web-based solutions for clinical information production, workflow management and analysis to healthcare providers and the medical transcription service organizations that serve them.

Telephonetics wins Empire Cinemas multi-year contract

Speech recognition and voice automation specialist Telephonetics has signed a multi-year contract with Empire Cinemas to supply its MovieLine automatic speech recognition (ASR) ticket booking and IT system to all of Empire's 17 UK cinema sites. 

The company grabbed the Empire deal from rival firm Eckoh, which has been working with Empire for the last three years. 

Windows Mobile gets enhanced voice-command capability

Microsoft Corp. has high hopes that a new speech-recognition application for the forthcoming Windows Mobile operating system will be attractive enough to draw people to the phone platform.

Microsoft today planned to announce a new service that will work on Windows Mobile 6.5 devices and will let people speak into the phone to search the Internet, make phone calls and dictate text messages. The technology comes from Tellme Networks Inc., a company that offers hosted voice recognition services and was acquired by Microsoft in 2007.

TellMe cuts the cord to Nuance

TellMe just had its best quarter so far. It has taken the company over two years to upgrade its platform and shed its reliance on Nuance technology.

When TellMe was founded in 1999, it used speech recognition technology produced by the original Nuance. Over time, the company upgraded its platform and continued to use Nuance technology even after ScanSoft bought out Nuance and took the Nuance name itself.

Now, TellMe has announced vast improvements to its platform, "the most substantial ... since Microsoft bought it in May 2007," noting that "the improvements ... take advantage of cloud computing..."

The article states that "The improvements include speech recognition technology developed by other units of Microsoft."

Sensory Releases Speech Recognition Development Kit for iPhone

Sensory, Inc. announced that it has ported its FluentSoft Speech Recognition Software Development Kit to the Apple iPhone platform. iPhone developers can now create applications that feature large-vocabulary, speaker-independent command-and-control recognition capabilities. Using a proprietary text-based phonetic engine, the FluentSoft SDK allows custom tuning of speech recognition sets containing thousands of words or phrases without the need for verbal training on the phone.
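
To give a feel for what text-based, training-free tuning of a recognition set means in practice, here is a minimal conceptual sketch: phrases are registered as plain text and the recognizer's output is matched against them. The class and method names are hypothetical and do not reflect Sensory's actual FluentSoft API:

```python
# Conceptual sketch of a text-defined command set: phrases are registered
# as plain text (no per-user voice training) and recognizer output is
# matched against them. Names are hypothetical, not the FluentSoft API.

class CommandSet:
    def __init__(self, phrases):
        # A real engine would expand each phrase to phonemes here using a
        # grapheme-to-phoneme (text-based phonetic) front end.
        self.phrases = {p.lower() for p in phrases}

    def match(self, recognized_text):
        """Return the registered phrase matching the recognizer's output, if any."""
        text = recognized_text.lower().strip()
        return text if text in self.phrases else None

commands = CommandSet(["play music", "next track", "call home", "open maps"])
print(commands.match("Next track"))   # "next track"
print(commands.match("shuffle all"))  # None: not in the command set
```

The practical point of a text-based phonetic engine is that vocabularies like this can be extended or swapped by editing text, rather than by collecting recorded training utterances from each user.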