It had never occurred to me until last week's surgery that vomiting after waking up in recovery isn't a given. I've puked every single time I've had surgery on my ear, going back to when I had tubes put in at probably 5 years old. I woke up from my first implant surgery in 2006 wondering why I even had to ask the nurse for a receptacle, trying to hold it in until she got it to me. Mentioning this to the anesthesiologist last week made all the difference.
Thursday, May 27, 2021
Friday, March 26, 2021
Although Bluetooth with my AB processor is not a completely new experience, something about having total direct wireless connectivity, and the implementation of it, changes everything. I did not keep my ComPilot powered on all day with my Q70. It got in the way by interrupting at inopportune times in various ways. I couldn't scroll through Facebook on my phone without it connecting and disconnecting despite my phone being muted. So "off" it went. Calls would come in and completely interrupt mid-conversation, to my great frustration. So, again, "off" it went. I'd wear it during the work week and only power it on when needed. Even powering it on or off meant my sound was going to be interrupted and cut out momentarily. I also had to wait about one minute for it to finish connecting to my Android phones before I could start streaming any music. Trying to do so any sooner would result in a stream that cut in and out like an older car struggling to start. On top of all of that, the stream could cut out very briefly if I wore the loop under my shirt, particularly shirts with collars, whenever I turned my head to the left.
I keep my phone on mute most of the time because I am an Apple Watch wearer. My phone is unmuted only when I happen to not be wearing the watch and want to be aware of any notifications. I hear everything coming from the phone as if it were coming from its speaker, but it is heard only in my ear and without interrupting my external sound. Incoming calls, whether or not my phone is muted, are heard as ringing just like any other phone, but they don't interrupt my ability to hear mid-conversation any more than a phone ringing externally would. Conversely, when I am streaming music it is loud, clear, and dominant.
ComPilot users may be familiar with how a 50% mix ratio would usually prove futile, since any external sound would easily overwhelm the stream and render it inaudible. This meant going to a minimum of 75%, which reduced the ability to hear externally while the ComPilot was connected. You also had to pretty much max out your phone's volume for the stream to be strong enough to be heard clearly (signal-to-noise ratio).
The M90's implementation resolves all of this. The default 50% surround mix coupled with a much stronger SNR means I can be walking along listening to music at full strength, come upon someone where conversation will ensue, simply pause the music using my Apple Watch (replacing the ComPilot's big button), and I'm all ready to converse! No holding my finger up to communicate the need to wait about 5 seconds for the ComPilot to finish its disconnect routine so that I could hear them!
The AB Remote app's slider control for the surround mix also gives me the ability to open it up enough to hear both my surroundings and the stream equally well by sliding it to the left. If I really want that 100% stream mix (no external sound), I just slide it all the way to the right. (The M90 doesn't offer complete 100% streaming with no external sound at all, but the external level is so low that it is pretty much impossible to hear anything externally, even when listening to spoken word or completely quiet passages.) The combination of the signal strength of streams and the ability to control the surround mix on a sliding scale (which can also be done using the M90 multifunction button while streaming) resolves two nagging issues from the previous generation of processors: 1. No more creating program slots just for different mix ratios. 2. No more insufficient signal strength from the stream or inability to hear surroundings as needed.
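To put some rough numbers behind why the old 50% mix failed and the M90's doesn't, here's a toy model I threw together. It treats the mix ratio as a pair of linear gains on the stream and microphone paths; the actual Phonak/AB signal processing is certainly more sophisticated, and the specific levels below are made-up illustrations, not measurements:

```python
import math

def mix_gains(stream_pct):
    """Toy model: treat the surround-mix setting as linear gains.

    stream_pct=50 -> stream and mic weighted equally;
    stream_pct=75 -> stream favored 3:1.
    Purely illustrative -- not Advanced Bionics' actual DSP.
    """
    s = stream_pct / 100.0
    return s, 1.0 - s  # (stream gain, surroundings/mic gain)

def to_db(gain):
    """Convert a linear gain to decibels."""
    return 20 * math.log10(gain)

def stream_advantage_db(stream_pct, stream_db, ambient_db):
    """How far (in dB) the stream sits above the surroundings after mixing.

    Positive means the stream is audible over the room;
    negative means the room drowns it out.
    """
    g_stream, g_mic = mix_gains(stream_pct)
    return (stream_db + to_db(g_stream)) - (ambient_db + to_db(g_mic))

# Hypothetical levels for illustration (not measurements):
# a weak ComPilot-era stream at a 50% mix loses to a noisy room...
print(stream_advantage_db(50, 65, 70))   # negative: stream drowned out
# ...so you had to push the mix to 75%, sacrificing the surroundings.
print(stream_advantage_db(75, 65, 70))   # positive: stream wins now
# A stronger stream signal can stay audible even at the default 50% mix.
print(stream_advantage_db(50, 72, 70))   # positive: both remain audible
```

The point the numbers illustrate: at the weaker stream levels of the old hardware, only a lopsided mix could rescue the stream, while a stronger stream signal lets the balanced default work.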
Phone calls through the M90, or rather, the headset functionality where audio is heard in the processor and my voice is picked up, has opened up where and when I can make or take calls. I previously only used the ComPilot on conference calls, and mainly where I would not have to speak, due to a combination of poor outgoing voice quality and a delay in hearing my own voice if on a 100% mix. I never used it on video calls due to the delay between the video and audio, and would opt for headphones or just the phone's speaker. The headset functionality of the M90 has me ready to take both types of calls at all times as well as able to participate in conference calls. Video calls are in sync with the audio. I find myself not needing to move to a quieter location when taking calls because I am able to hear the call so well, as well as hear myself speaking. Where I've needed to move has been for the benefit of the other end. It's worth noting that what we hear while on a call as our surround mix is what the other end is hearing, because the headset functionality switches to a single BTE mic (specified during programming as the "Bluetooth side" in bilateral/bimodal fittings) to pick up our voice. Those using two processors or going bimodal will hear the call on both sides, but only one side is picking up your voice. Keep this in mind and turn that side away from noise, just as you would if you were holding a phone up to your ear. Also of note: the multifunction button while on calls only controls the phone's volume for calls. It does not control the surround mix as specified in the M90 user manual. When increasing the volume of the call stream, the SNR increases and the call volume overrides the surrounding noise.
The one complaint I have in regards to phone calls with the M90 is the lack of processor volume control. Processor volume is not the same as the stream/phone volume; it is essentially the volume for our hearing, and it should be set for optimal clarity and speech discrimination. Since all processor volume control during any kind of streaming is done through the remote app, which is not available during any headset use/calls because the Bluetooth protocol in use conflicts with the protocol the app needs, we are unable to adjust the volume. The volume will be locked at whatever your default volume is under AutoSense. You cannot adjust your volume under AutoSense prior to a call and have that carry over to the call. The default volume should be sufficient, but if you are like me, your volume needs change as the day goes on, or in response to weather and climate changes that affect internal physiology. This means that the default volume on calls can be insufficient for my needs. Over last weekend, while on a beach walk using headphones, which I continue to enjoy for the different sound stage and bass, I was already up to a volume level of 3 in the app. I needed to make a call after the walk, so I switched back to the M90's Bluetooth. The processor volume level for the call was too low for me to hear clearly after being on level 3, so I had to quit the call, reconnect the headphones, and resume the call with the headphones, where I had full control over the processor volume. If I did not have headphones at that moment, the other option would have been to switch the call from the M90 to the phone itself so that it was routed through the phone's earpiece, and hold it up to my T-Mic.
TV Connector has been a fun tool to work with! Unfortunately, I have not been able to incorporate it into my entertainment system due to technical issues that would require me to change settings on both my TV and receiver every single time I wanted to use it, so I removed it altogether. I now just have it plugged into the headphone jack on the receiver for when I want to use it, though this blocks all external sound. This is actually fine given my friends and I regularly do "silent disco" on weekends, where we are all plugged into the receiver via headphones (actually, we all use wireless headphones so as not to be tethered). This is for complete immersion in music, since all of us are musicians who take our music listening seriously. I use an active headphone splitter with 4 outputs connected to individual Bluetooth transmitters to allow all of us to hear wirelessly. Since I have two wearable subwoofers available, these are also thrown into the mix, with one of my friends using one and me using the other. Both wearable subwoofers connect via Bluetooth while using wired headphones from the wearable subwoofer to hear… normally. Where TV Connector comes in is that I am able to be completely wireless by having the wearable subwoofer connected via Bluetooth while my M90 receives sound via TV Connector! There are no cables anywhere on me and I'm completely immersed in music! I can also set my surround mix so that I am clearly hearing the music but fully able to hear around me. This can be amusing given that my friends will often get so wrapped up in the music that they forget they have headphones on and begin singing… so I'm getting the perfect mix of their voice and the actual vocals. They need to remove their headphones whenever we engage in discussion or just talk in general, while I don't need to do anything!
AutoSense 3.0 OS… how do I feel about it? If you know me, you know I'm not one for automation or hand-holding. I was activated at a time when there were no features for hearing in noise, so I learned to listen through it where possible. The first feature for dealing with noise, other than utilizing a lower IDR (which I never used), was ClearVoice. I didn't care for my sound quality fluctuating, and I certainly did not want to listen to music through it. My everyday program is a wide-open, no-features program. My IDR is adjusted to be functionally larger than the default max of 80 dB, to between 90 and 100 dB. Switching to a ClearVoice program on the fly for noisy environments did not prove helpful for my needs, other than on those days I had a headache and simply wanted to reduce the noise level. In the waning days before I had it removed from my processor altogether, it had been relegated to use on flights or long car rides where I wanted some peace and quiet while still being able to hear.
The first game-changer to come along to help hearing in noise was the Q70's UltraZoom. It filtered out background noise without actually changing my sound quality and proved genuinely useful in improving speech discrimination in noise. It was routine to switch to my UltraZoom program when sitting down at a restaurant with others, while ensuring I was seated optimally to capture those around the table. When the Q90 was released, new features were rolled out with it: StereoZoom, which was exclusive to the Q90, WindBlock, SoundRelax, and EchoBlock. I got to experience StereoZoom while trying out the CROS and discovered the next level in dealing with extremely noisy situations, one that would actually be useful to me. The other features were another story. EchoBlock required switching to a program for that purpose. I was not about to switch to a program before entering the restroom at work where coworkers would engage in conversation. SoundRelax can be tied to a stake and burned as far as I'm concerned. It isn't helpful to have me questioning what the hell is wrong with my car door, or why my ceramic coffee cup has turned to rubber, if I'm going by auditory cues. WindBlock doesn't actually help hearing in wind; it blocks the sound of wind blowing on the mics. This would be fine except that it is falsely triggered when using headphones or in the presence of just the right subsonic frequencies, resulting in my being unable to hear when there is no wind in the first place.
With all of that long background out of the way, I'm about to savage AutoSense 3.0 before I redeem it. AutoSense is marketed as making seamless changes without the recipient even noticing. That's a problem. You SHOULD notice it, in the sense that you should know exactly what it is doing. I have a baseline for my bionic hearing that involves hearing everything. I know what I can and can't hear in different environments. So when I'm at work and I'm not hearing intercom announcements, particularly when I'm standing right under one of the speakers, because AutoSense has decided it is noise I don't need to hear, that's not adding to my quality of life. When I'm putting on my high-end headphones over my T-Mic for a music listening session and my sound goes from lush and detailed to thin and barely audible, what has happened? I know exactly what is happening and how to resolve it. How does someone who is having everything thrown at them as the baseline for their new hearing know this? "It makes changes without you even knowing" doesn't sound like something that should be said with a smile anymore.
Unfortunately, WindBlock and SoundRelax have been pervasively woven into AutoSense 3.0. The professional thinking is that these features only rear up to help you hear or to make sound comfortable. How did I learn how pervasive they are? It took two tries over the course of a week to get them removed from my programs, which means I got to be out in the world, hear their hallmarks rear their heads, and contact my audiologist to check my programs to verify I'm not crazy. Both of them need to be disabled under every single classifier under AutoSense, except for Music (and any streaming programs). There is no universal off switch, and they are on by default. I mention the Music classifier because I did attempt to use headphones while on AutoSense. This should ideally switch to the Music classifier, which it did. At some point mid-song it decided music had ceased, because WindBlock was then applied and I could no longer hear. This could only happen if it had switched to a classifier that had WindBlock enabled.
The first order of business with AutoSense was to turn off anything that would alter my sound quality. AutoSense has the latest versions of UltraZoom and StereoZoom as part of its toolbox, which makes it immediately attractive. With the undesirable elements removed, AutoSense was then set up to be usable. It has been set as my default program over the last few weeks to ensure I let it do its thing without forgetting to be in it. It has been a bit of a mixed bag. It has been difficult to nail down how it will behave from moment to moment, which I attribute to it working to optimize for each environment as best it can. I do trust it to beamform in environments that call for it, to the point of feeling secure in tossing out the need for a custom "Speech in Noise" program. There have been moments where it is beamforming when I need to hear around me, but again, not consistently. It seems to be constantly working to decide whether I need to hear directly in front of me or perhaps the person sitting or walking at my side. My custom program with no features is a very quick click away for those moments where I need to take control back, but they've been pretty infrequent!
Music, as already mentioned, is one of the classifiers under AutoSense and I love that fact! It’s actually interwoven into the processor at the streaming level, as well. I did wonder how much of a difference I would really hear from my default program which is ideally already set to hear music without any change. It does change things for so many others where lower IDRs are being used because it applies the widest parameters specifically for enhancing music. This was only possible previously by having a separate program for it, so the new system makes music more accessible than ever. There was one moment that stood out about a week ago while sitting in my car having a conversation with a friend out in a parking lot while music played over the stereo in the background. When we stopped speaking, the music swelled from being background and moved to the foreground, becoming lush and detailed. When we began speaking again, it moved back to the background. It was a moment that made me smile knowing I had just clearly heard AutoSense do exactly as it claims.
I also have the Music scene classifier set up as a custom program to compare with my no-features custom program. Overall, music with the M90 has been a step up all around. There are more subtle details being fleshed out in songs. It seems to have an even greater dynamic range and less compression than we've ever had before, and the audio frequency range at the bass end seems to extend slightly lower. This is in comparison to the no-features, wide-open IDR program I've used for years. I'm certainly looking forward to my first concert with my M90 (currently slated for September!) I do switch to my music program for dedicated music listening sessions, since anyone speaking will interrupt it. This presented itself while watching the Grammys two weeks ago. It's not necessarily a negative, as I can see being in a social environment like a bar where such switching is ideal.
As long as AutoSense gives me the ability to take the reins by switching to custom programs and remove undesirable aspects of it, I am happy with it. I appreciate no longer needing to switch to an UltraZoom program. Time will tell if I wind up moving it from the default position to the end position on my programs. The world has yet to return to normal, so the opportunities to truly test drive socially out in the world have not been as plentiful.
My left ear is a dead ear. I originally thought I would implant it as a "nothing to lose" approach back in 2006, but was discouraged by Dr. House from doing so since it was unlikely to offer good results. I listened to his advice and let him implant my right ear, the only hearing ear I ever had, and obviously that choice has delivered over and over. Over the last few years I began to mull over the possibility of doing my left ear, despite the odds against it offering much benefit in the way of actual hearing, given that it has never even had actual sound, just vibrotactile sensation. At some point along the way I made up my mind to go ahead and do it upon the release of the next processor. That time is here now with the M90, and I'm following through. Fifteen years later, almost to the day, I will be bilaterally implanted. I was implanted on Wednesday, April 26th, 2006. I will be implanted on my left on Wednesday, April 28th. Wednesdays are the only day my physician does surgeries. My expectations lean more towards gaining the bilateral benefits of using two M90s than towards the actual hearing. I mentioned earlier how much I liked StereoZoom, a bilateral feature, and taking this step will add that level to my challenging-situations toolbox. There is definitely nothing to lose.
A small part of me is excited at the prospect of just maybe being able to experience stereo hearing, or even surround sound, for the first time in my life! It would be like going from seeing in two dimensions to seeing in 3D. It will also give me insight into the challenges faced by others whose results do not offer rockstar activations like I had with my right ear. I am absolutely ready to take this new journey.
Friday, January 12, 2007
Maybe it's the wider pulse width Aniket programmed for me before I left that last day. Maybe it's the fact all 16 of my electrodes are back on now, after electrode 1 had been turned off since last summer in response to the unusual impedance I was experiencing on that electrode, giving me 7 additional channels over what I had in the trial. Maybe I'm picking up right where I left off with F120 a month ago without having to start over. Whatever the reason, the reduced speech discrimination I experienced with F120 during the month-long trial is not manifesting.
On Monday, my Harmony was loaded up with F120 on Full BTE Mic on Slot 1, Hi Res P with the 1st electrode turned off as before on Slot 2 for me to fall back on if needed, and Aux Only (Direct Connect or T Mic Ear Hook only) input with F120 with the AGC turned off on Slot 3. My PSP was also loaded up similarly.
Before leaving I was tested in the booth using both Hi Res P and F120. I don't really remember what my single-word score was, other than that it had improved over last time, because I was paying more attention to the fact that my dismal 17 percent HINT sentence score from my last visit to HEI shot up to 57 percent this week while using F120. Dawna, my dear audie, was floored herself. The thing is, it was still "settling in" while I was in that booth, so I can only imagine what it's going to be at my next visit in May.
Talk radio is really starting to get easy to listen to. Keep in mind, my listening to KFI AM 640 takes place in my car, which is ridiculously noisy due to the fact the car is basically falling apart. Then there is the fact my antenna is broken, so reception is hard to come by… it comes through best crackling on the freeway, and it goes away every time I go under an overpass. I've mentioned these details before. What's different is I'm no longer just understanding groups of words here and there or phone numbers… I'm pretty much locked in to what I would guess to be about 95 percent of what comes over the airwaves. It's been almost amusing to me as I listen to the commercials and news bits come through… it takes me back to being 5 or 6 years old, riding in the car with my Mom and brother to school in the morning while listening to "Chicken Man" (a parody of Batman) on the radio. I've mostly avoided the radio since those days. It was Wednesday that I realized the jump in speech discrimination while listening to the radio, as I wound up listening to the entire Presidential speech while driving to work. I'm now making a point of listening to the radio on my drives to and from work rather than music, just for the fact it's really opening up those listening pathways.
Music takes over once I get to the office, and my experience with it through F120 has been just as exhilarating. For 8 hours, music is piped directly to my brain from my Creative Zen MP3 player. The first few days I heard the familiar "roughness" at times that I've experienced with all of my previous strategies, but each day over this past week has also shown a smoothing of that roughness. I did have a great moment when I realized that the Everly Brothers' "Cathy's Clown" sounds exactly like it does with normal hearing! There is an instrument part following the lyrics "I die each time…" and "I hear this sound…", which I believe is a brass instrument, that was absent with Hi Res, and I couldn't really hear it all of these years with my hearing aid either. I remembered it and knew it should be there… 25 years later, there it is with F120. This was one of the songs embedded in my head that I had planned to use to test the CI for accuracy, and it passed.
I don't think I'll be needing that Hi Res P back-up on Slot 2.
I die each time
I hear this sound
here he comes
that's Cathy's Clown
Cathy's Clown - The Everly Brothers
Wednesday, December 13, 2006
In the week leading up to the warm-up gig, I began to get acclimated to my new hardware and software. I fell in love with the Harmony from day one. The first night I was home from AB, it was an unseasonably hot day in Southern California. While at home that night, I found myself walking around shirtless with total freedom to turn my head without fear of the headpiece being ripped off for the first time in months! If I had tried this before, the friction from the wire rubbing on my bare back would have limited mobility.
Having worn a hearing aid for so many years, I felt far more confident with something sitting on my ear in the same manner than I did with a headpiece. The Harmony headpiece (the same as the Auria's) is smaller and sleeker than the PSP's, with only a short wire connecting it to the processor. To those looking at it, they would see either a hearing aid or a Bluetooth device, allowing me to either deal with a familiar attention-getter or simply blend in. Actually, the fact that it's black (well, technically it's "dark sienna," but I prefer to just call it black) goes a long way in adding to the cool factor.
The T-Mic was one feature I had really been looking forward to trying ever since my initial research prior to implantation. Unfortunately, I quickly found out that it would not work with my Sidekick II cell phone because it picks up the electronic pulses the phone emits. I could listen through it, but it wasn't fun to do. I had already not bothered asking for the telecoil to be activated on the processor, since I knew from trying my hearing aid's telecoil prior to getting the CI that the Sidekick wouldn't work with that either. That left me with either taking the hands-free ear bud, as I had been doing with the PSP, and placing it in my ear so the T-Mic could pick it up, or holding the phone to the BTE mic placed at the top of my ear.
Listening through F120 that first week was an adjustment. As with all strategies, my brain had to adapt to it before it would begin to maximize its potential. My speech discrimination ability took a noticeable couple of steps back, which I found quite frustrating. This is normal when starting a new strategy, be it CIS, SAS, MPS, or Hi Res. You start over each time. What was different with F120 was that it was more readily natural and pleasant sounding. There were no cars whizzing by sounding like mosquitos; they sounded exactly like they should. There was less of that graininess of sound one gets when a strategy is settling in. Yet, I could tell that I was only beginning to learn to process the stimulation. This wasn't going to be "instant-on." I had a learning curve to overcome.
Music did change. The first thing I noticed was that guitar solos seemed to have much brighter "color" to them, on top of the fact that they just jumped out more than with previous strategies. The second thing I noticed was that the relatively rich bass of the Hi Res P program waiting for me on Slot 3, which my F120 program was built upon, was not as present, so I didn't feel anchored to a bottom end. I hadn't forgotten that I felt this way about Hi Res P in comparison with MPS when I had started out with those… so I knew that was an aspect that had yet to settle in. Not only that, but it was only a couple of days into using F120 that I tried switching back to Hi Res P for a couple of moments and found it sounded muddy, sending me right back to F120.
It was at Paladino's that weekend in the San Fernando Valley that I had my first highlight moments with F120. Actually, it was on the way there. It turns out my processor quite likes my roommate's stereo in his truck. The sound of the music was vibrant and alive. The various layers were cutting through rather than sounding muted and muddy as they do in my own car. Apparently I have crappy speakers.
When we got to the valley to pick up our other friend, we switched cars and I found myself in the front passenger seat while we drove to go eat dinner. As we conversed while driving, I listened to my roommate sitting in the backseat without turning around to look at him. Normally I'd just tune myself out of a three-way conversation in a car since it was too much work, but I just sat there and chatted away nearly effortlessly.
When we got to the club, the Journey tribute band "Lights" was playing. As with my roommate's truck stereo, I found that even the in-your-face volume of the live band did not degrade going through my processor. I've never been a Journey fan and I am not familiar with their material, but I found myself liking the band anyway. It was also the first time, out of all of the live shows I've been to over the years, that I understood the between-song banter. At one point I started looking up their website when the singer announced the URL to us.
The Pink Floyd tribute band was my first chance to listen to a live band with music that I actually know. The results were very satisfying. I found myself hearing pretty much every note of the guitars cutting cleanly through. With my hearing aid, they would have been muted and indistinct. Before I left the club that night, I knew that live music was back in my life for certain for the first time since I had gotten the implant.
The Paul Stanley concert at the House of Blues the following Tuesday maintained those satisfying results. At the club in the valley, I was close enough that I could read lips and therefore question my real ability to understand speech in that environment. The House of Blues is a much larger venue, and I was not close enough to be speech reading. Yet, I understood nearly every word of Paul's between-song banter. The details of the musical performance were very accessible to me. There were songs I didn't know that well that I wound up getting into as a result of the performances. It had been a year and a half since it was practically thrown in my face, at that very same venue, that those pleasures were being taken away from me, so the realization that I was there again and had succeeded in getting that back got me emotional while surrounded by throngs of KISS fans.
The following weekend I went to see the play "Grease" with my neighbors. It was only when we arrived at the Long Beach Performing Arts Center that we realized we had purchased tickets to a high school play, as closer inspection of the tickets we had bought through Ticketmaster revealed. We forced ourselves to go in anyway and give it a chance. We live here, so we might as well support our community. Unfortunately, I did not get quite the same positive results at the play as I did at the concerts. I barely understood most of the dialogue. However, the performances of the songs, which convinced us to stay, were another story. Some of these kids were good, real good. One of the highlights turned out to be "Freddy My Love," a song not performed in the movie but included on the soundtrack album, sung by Cindy Bullens. In this play, the character of Marty, one of the Pink Ladies, performs it. She actually did a better job than Cindy Bullens, and I found myself thinking she was lip-syncing, which she wasn't. I never even really cared for that particular song, but you couldn't deny her delivery. The other highlight was Rizzo's "There Are Worse Things I Could Do," which had us bordering on tears. The singers carried these performances, and I was able to appreciate every detail of it through F120.
Speaking of "Grease," the original movie soundtrack album was used to gauge F120's ability to faithfully recreate the original sound. It was that album that I had spent countless hours blasting straight to my ear prior to losing it as a child, so the details are embedded into memory. Collectively, the songs provided a healthy range of instruments, with plenty of brass interlaced between the guitars and drums. Only seconds into the opening title track, "Grease," I noticed a subtle sweep of the keyboard keys had become quite prominent and natural sounding compared to Hi Res. Frankie Valli's voice sounded just as I remembered it sounding when it was making the rounds in heavy rotation on the radio back when it was new, when I was not quite sure whether it was a man or a woman (I thought it was an old woman). The brass solos in "Greased Lightnin'" and "Born to Hand Jive" were much more natural as well as standing out in the mix. The strings in "Love Is A Many Splendored Thing" were lush and smooth.
Over the month, the real hurdle was getting speech discrimination to where I need it to be. This brings me to stressing a very important point on the eve of the roll-out of the Harmony and F120. Those who have gone before me have stressed this point too, and it is an important one for all who go into this strategy to remember: learning curve. F120 is an advanced strategy that delivers a lot of detailed information to the hearing nerve, as well as an amped-up IDR of 96 dB. These differences are very evident from the get-go, but deriving the full benefit takes time. It was not until the last week that my speech discrimination began to approach its previous best levels since implantation, and even then it needed more time. Settle in and go the distance. If you go in to get mapped with F120 expecting to be wowed before you even leave the office, you'll be setting yourself up for disappointment. The strategy needs to be used consistently and persistently before the full benefits are realized.
One indicator of improvement in speech during that last week of using F120 came while listening to KFI AM 640 talk radio on my drives to and from work. In this environment, I am hearing through road and car noise to poor radio reception caused by a broken antenna. On one particular night, several speakers were going back and forth at once. I found myself understanding them almost fluidly, as opposed to being thrown every time the speakers switched.
When I went back to AB last week to turn in the Harmony and go back to my PSP, I missed the sound quality of F120 immediately. It was a case of not knowing what you've got until it's gone. Hi Res P sounded dead and hollow by comparison. On the bright side, I have since found out I will only need to go one more month without my Harmony and F120, as they will be waiting for me when I next go in to see my own audiologist at HEI just after the holidays. It's the perfect way to start off 2007.
They say it's over
But I have just begun to fight
It may look hopeless
But that's the moment from what's right
Someday, someway, somewhere
I'm gonna take you there
Where angels dare to fly
Some people wait forever
Some people just run out of time
Some people live in darkness
And give up just before the light
You (you), me (me)
No we won't back down
Let all the others wait
I want someday, someway, right now
Where Angels Dare - Paul Stanley
Wednesday, December 06, 2006
There I sat in the R & D department of Advanced Bionics in Sylmar. I'd be lying if I said I wasn't excited. Participating in clinical trials that have anything to do with bettering hearing in any manner had long been something I had dreamed of doing, ever since I was a 7-year-old kid willing to do whatever it took to get better again and help the rest of the world in the process.
Of course, the fact that I was about to be unleashed from the wire, the one running down from the headpiece above my ear, clipped to my shirt at the middle of my neck, and continuing down my back to a box perched at my waist, by being fit with the new, as-yet unreleased Harmony BTE processor was enough to sustain the side of me that couldn't care less about saving the world. Slather on being mapped with Fidelity 120 on the new processor as dessert, and it was "Let's ride!!!"
I have to shake my head when I think of the R & D team at AB. I'd heard the stories before… I guess I just had to experience it for myself. I am certainly proud to know these are the people behind the implant in my head. They play a vital part in the process that brings the technology to us. They are passionate people, and it shines both in the job they perform and in their personalities.
The bulk of the two consecutive days I spent at AB were with Aniket Saoji. Aniket is one of the most pleasant people I've ever met in my life. He takes pride in causing people to heap praise on him for his mapping skills. I'd heard of his work with others before, and here I was experiencing it myself.
Aniket started the first day off by fitting me with the Harmony in Silver outfitted with a T-Mic. He transferred my existing Hi Res P program from my PSP to the Harmony and had me try it out. For some reason, despite it being the same program I had been using all along on my PSP, I was hearing everything as if through the blades of a fan. No one else had ever reported this happening before, so apparently I was breaking new ground. Aniket didn't waste any time calling in the troops to solve the problem. He took my Hi Res P map and used it to build my F120 strategy on the other slot. When he had me switch over to F120, the chopped-sound effect disappeared, and I was hearing with 120 channels for the first time.
Hearing with 120 channels was as if new areas of the auditory cloud cover had opened up in a haze, and the overall light was brighter in a blinding sense. I found it more difficult to understand speech, as my sense of hearing was being fed new information in new locales within my cochlea. Just as when I was initially mapped with Hi Res P, I had to settle into it.
Then the work began, and again I would be breaking new ground. I was the first to participate in a new trial that involved hooking me up to a computer running software that would switch between strategies on the fly while I just listened as they switched on me. There were a number of kinks that had yet to be worked out, which I was able to bring to their attention.
Lunch took place in the conference room with the rest of the R & D staff. At this point, I was using F120, mostly as a way to let it start settling in. It wasn't as easy to discriminate speech, and I also noticed that I was hearing more, with all of it coming in at once. However, I managed to have quite an enjoyable conversation with everyone sitting at the table without exerting too much effort.
Just before I left the first day, Aniket tweaked a strategy that I found immediately pleasing. It had the richer bass of MPS yet the sharp treble and detail of Hi Res. Having just come off a few weeks of a turning-stale Hi Res map before walking into AB, I had never been more thrilled in my life over a strategy right out of the starting gate. I immediately made some calls with my cell upon leaving the office to test it out. After speaking with several different people while driving, I had the CD player going the whole 50-minute or so trip back home. It seemed logical that I was hearing with F120, taking into account that with Hi Res through the Harmony, I had been getting the fan effect. That program was on the other slot, and I wasn't about to go near it.
The next day, Aniket revealed what the strategy I had been enjoying was… Hi Res P. He had tweaked it not only to get rid of the chopped-sound effect, but also to make it far more pleasing. I most certainly made sure to dole out compliments to him on having done so well with creating that program.
By the end of the second day, the Harmony was programmed with F120 on full T-Mic on Slot 1, F120 with full BTE mic on Slot 2, and the aforementioned Hi Res P on full T-Mic on Slot 3. I also managed to get Aniket to switch the Silver Harmony for a Dark Sienna, the color my own Harmony will be when I receive it. And with that, I left AB to begin my first foray into using the Harmony BTE and 120 channels out in the real world.
Looking for a thrill
You'll get it my way
Let's hit the highway
and I'll take you down
Shoot out in the night
Lookin for action
the main attraction
is back in town
Under The Gun - KISS
Monday, October 09, 2006
The anniversary of my blog was definitely worth pausing to look back on things. In August of 2005, I was quite uncertain of what lay ahead. I was determined to make it happen, but I was afraid that I'd be thrust into a world of alien sound. There were people talking about it sounding like natural hearing, but then there were people talking about how mechanical it sounds. Never mind the complaints about how horrible music sounded. The one common theme was that they all seemed to have improved speech discrimination abilities. With my hearing slipping, priorities took over. I was already getting a taste of what it's like to be cut off from the world on that level... having it get worse wasn't an option.
If anyone had told me at that time, when I had just picked out the device to be implanted in my ear at the evaluation testing, that I would be where I am now... I'd have laughed and dismissed them. I had no idea that the phone would be back in my life, and not just part-time either. Those phone minutes on my cell are actually getting used. The relay service has become a museum piece. On the Friday just prior to the weekend of my blog's anniversary on August 21st, I called up the clinic to pay my medical bill using my bank card. The woman who answered had a Spanish accent, and I had to correct her when she repeated the card number back wrong. It took 2 minutes to complete the task by just picking up the phone and calling, versus the 10 minutes it would take to call through a relay service and get through the transaction. This was the first cold call I had ever made to someone I'd never talked to before in my life. A week later at the mapping, in-booth testing found my sentence discrimination score at 80%, up from 50% at the previous mapping at the one-month post-activation point. The kind of thing I barely allowed myself to dream of a year prior had turned to reality 3 months post-activation.
Since then, I have also dealt with my auto mechanic and their customary call-back after you take the car in, when they tell you what's wrong with it and how much it will cost. This is something that is rather hellish to do through the relay service, since the people in those places tend to hang up before the relay operator can get past "California Relay Service with a call..." Then, if you do manage to get them to stay on the line, they are generally annoyed, and you spend the first minute or two just trying to get them to work with you before getting to the real stuff. Apparently those days are behind me now, too. I've found myself in various situations that required speaking with people I've never spoken to before on the phone. For the most part, it goes well. There are people who are difficult to understand without repeats... but there are also those who require very little repeating. That is what encourages me to just keep diving into the deep end.
Music... there are times I break into fits of laughter while listening to it because I'm in awe of the detail I'm hearing. My fears concerning the possible loss of music as a source of enjoyment post-implant were dismissed early on. Not only can I enjoy the tones of music, but understanding what is being sung without a lyric sheet is creeping in. Then there is the fact that, as a fan of live recordings, I'm actually able to enjoy the between-song banter as the performer speaks to the audience. That was always something to skip over before.
The 3rd-month mapping brought about some changes. I came to the full realization that mappings generally mean that whatever progress I have made with sound quality will be set back temporarily. I've read it mentioned by others, but you really do pretty much start over with each mapping, since you must readjust each time; each time, though, it is being tightened up, fine-tuned. My comfort levels were increased, while my thresholds stayed the same. MPS, which had been settling in nicely with music, lost its pleasing sound after the mapping. Hi Res was again proving to be the choice for clearer speech, but neither was really giving me music the way it had just prior to the mapping. Both resembled how they sounded in the initial weeks post-activation. So that put me off being excited about mappings... for two weeks.
Two weeks later, music started smoothing out... on Hi Res. It wasn't perfect and I had to keep a conservative volume, but it was getting there. It improved as each day passed. I would only switch to MPS briefly, and not be satisfied with how it was processing. Whereas last time around I started out with Hi Res, which declined in quality when my volume needs increased, and wound up switching to MPS full time by the 3rd mapping, this time around Hi Res is the only strategy I am using, now a month and a half since the mapping. I found the trick was to stick with it rather than switching back and forth. It turns out that MPS is easier to take on and settle into more quickly than Hi Res is. That had influenced me when listening to music, since it just sounded more readily pleasing with MPS. This time I gritted my teeth through less-than-pleasing bouts of music listening on Hi Res, and I have since been rewarded with the kind of results MPS was bringing at its height, while the light continues to break through the haze.
There is also the fact that the FDA requirement for using the 120-channel strategy is 3 months of Hi Res usage. Had I stuck with MPS, that would have been a concern. I imagine my Audie would have allowed me to try 120 regardless... but I wouldn't have gained the patience required for allowing an advanced strategy to settle in. Looks like I'll be ready to go now that the processor and the strategy have been approved for the U.S. and will begin to be rolled out in January of 2007, just 3 or so months away.
It's difficult for me to complain too much about life these days when I realize the blessings bestowed on me. I find myself generally happier and less stressed out by everyday things simply because the world is opening up. The same problems are there, but they just don't cause my smile to crack so easily anymore. Even when I'm down, I still have the blessing of turning to music to get me through. Ironic.
I can't write a love song the way I feel today
I can't sing no song of hope I got nothing to say
Life is feeling kind of strange since you went away
I sing this song to wherever you are
As my guitar lies bleeding in my arms
My Guitar Lies Bleeding In My Arms - Bon Jovi
Saturday, August 19, 2006
Cut to some 30 or so years later: I have spent hours patrolling Manhattan, swinging on a web line through corridors of buildings. I've walked up to the Empire State Building and climbed it all the way to the top to see the city of New York all around me. I've hitched a ride on a helicopter out to Liberty Island and climbed up the Statue. I've whupped purse snatchers' butts and returned the purses to their owners to receive their gratitude. All possible via modern technology, through video games. Here's the view from the Empire State Building tower. Yes, that's the Met Life building in the background.

The games of today barely resemble the games of the 1980s, when coin arcades were huge. Nor do they have much in common with home systems like the Atari or Nintendo. The games of today weren't even possible then. Besides the obvious graphical improvement, there is the artificial intelligence as well as the sheer size. Entire virtual worlds that are wide open for you to explore are available now. While credit goes to the fantasy game Zelda: Ocarina of Time for presenting the first open kingdom, the format was first popularized by the release of Grand Theft Auto III, a game where you take on the life of a con man. The theme was similar to the film "Payback" with Mel Gibson. Night and day pass. It rains, the sun shines. You can take a walk along the bay just looking at the city across the river, or you can carjack whatever car you see that you like. You have various contacts throughout the city. There is no maze to walk through, no simply moving from point A to point B to finish the game. You have to actually learn your way around the town just as you would any real-life town. It is a simulator; you get to live the life of someone else in another world.
Playing as Spiderman enables you to get a feel for what an average night is like for the wallcrawler, as well as just the thrill of having all of his abilities. You actually do have to practice to learn how to webswing, which requires figuring out where to shoot a webline while swinging, throwing your body weight around to manipulate your direction and speed, and navigating around the buildings without crashing into them. Most people trying it out for the first time tend to give up and just run on the sidewalk.
When Spiderman 2 was released on the XBox, it came without subtitles, unlike many games, including the Grand Theft Auto series. I played it anyway, using whatever cues were on the screen. But I missed out on the main story that unfolds as you play, as well as everything else. I tried playing it a couple of weeks ago to see how much better it would be, and I did note it was improved. I understood bits of dialogue during the cut scenes (sequences that tell the main story with no action requiring you to control the character). But the random dialogue from people on the street and those calling for my help was still pretty indistinct to me.
I wasn't daunted by that. Considering that the game contains, for the most part, all the sounds you'd hear out on the street in any major city, it's a real-life audio simulator. There are cars honking along with the rest of the noise they make. People yelling. Birds chirping. Helicopters flying overhead. It's some heavy listening rehab.
After several weeks of not playing at all, I decided to give it another shot. I didn't have my hopes high... I just figured it would be about the same as before, considering I am badly in need of a mapping, with things getting too loose and my having to boost the volume considerably. So I jumped into the game and started swinging. I was keeping a fairly low altitude, going just above the cars on the street, when I heard someone shout "Hey, it's Spiderman!" Further down the way I heard someone call "Spidey! Help!" to which I responded. They said, "The police are chasing after someone, they might need help!" I also heard a crowd that had gathered cheer me on while I broke up an assault on a pedestrian by two thugs.
Obviously, being able to hear all of these little things, as insignificant as they seem, allows for further immersion in the virtual world. Even if subtitles were present, it's difficult to read them while also paying attention to what you are doing, as opposed to passively hearing the dialogue. These games are only improving in their quality and their ability to capture what it is like to be a superhero. A game based on the movie Superman Returns will be out this fall for the XBox 360. As with Spiderman, it features a completely rendered city of Metropolis that you must protect by flying around as Superman. So not only do I get to be Superman, but I get to visit a fictitious city (and a pretty nice-looking one at that).

Considering that I work nights, without much of a chance to get out and about during the week, it's a great way to practice listening and understanding speech in difficult situations, as well as recognizing all kinds of sounds. Obviously, the option to speech-read people in these games does not exist (not yet, anyway; mouths open and close but do not form words when speaking), so I am forced to rely solely on my hearing rather than on old habits. And that's how I do my listening rehab.
Spiderman, Spiderman,
Does whatever a spider can
Spins a web, any size,
Catches thieves just like flies
Look Out!
Here comes the Spiderman.
Spiderman TV Theme Song - Unknown
