Try asking difficult questions. I was reading this enthusiastic story about smart contact lenses, and I had one big one. It isn’t answered.
Now, Google’s taken another step in normalizing Glass. It’s unveiled a smart contact lens containing a silicon chip so small it’s the size of a piece of glitter.
The lens is intended to help diabetics track the glucose levels in their tears. It has a sensor embedded in the thin plastic and a wireless chip so that it can communicate with other devices. And engineers at Google’s secretive X labs are working on putting LEDs in the lens so that it can show users a visual warning if their glucose reaches dangerous levels.
Scale that up, and what you get is a version of Google Glass that fits in your eye.
Wait, wait, wait. The lens of your eye is not going to focus on something plastered on the surface of the lens. You aren’t going to be able to put a video screen equivalent on there and have text scroll by, for instance. I can see the specific example they mention working — a pulsing flash, out of focus and perceived as a changing level of light, could serve as an alert — but you’re not going to be able to scale that up into a heads-up display.
They have a video demo of a similar system. It’s a contact-lens-sized disc, all right, clamped in place with great big connectors leading into it, flashing a dollar sign under computer control. Yeah, I can see it when your video camera is focused on it from a foot away…but try sticking that directly on the lens and then shoot your video. It will be disappointing.
Miniaturizing circuitry is not news. There’s a problem in optics here that the gushing gadgeteers aren’t at all prepared to even think about.
Kengi says
I remember a system that was being developed a few years ago that was going to use planar Fresnel lenses to manipulate the light coming from each pixel and refocus it to generate a clear image.
http://iopscience.iop.org/0960-1317/21/12/125014
mykroft says
I could imagine the creation of a meta-material with optical properties under electrostatic control, so that it could change the focus of the lens dynamically. Retaining focus on a distant object with a visible overlay of pixels from the contact lens would be quite a trick, however.
Of course, if you throw in alien technology as was used in some Torchwood episodes, the lens could not only overlay text entered remotely, but provide a video feed of what the user sees back to the remote site.
kevinalexander says
More likely it will send specific information to your smartphone which will alert your doctor even if you ignore it.
george gonzalez says
Tech and science reporters are not very savvy; if they were, they’d have a job in the field.
An eye-based glucose monitor is not going to cut it, for so many reasons. You need many milliwatts of radio-frequency power to go just a few centimeters. You need an antenna that’s at least 1/4 the wavelength of the signal; even at 5 GHz, that’s still a good 15 millimeters of antenna, and antennas do not work well in contact with liquids. And a battery that can put out 5 milliwatts for a day is going to be a whole lot larger than a speck of glitter. They could send microwave power to it, but the WHO limit on eyeball exposure is something like 1 milliwatt; any more and you risk provoking cataracts. People worry about their water meter transmitters; they may not be too keen on focusing microwaves into and out of their eyeballs. I wish them luck, but the technical, biomedical, legal, and marketing challenges are formidable.
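The quarter-wave figure is one line of arithmetic to check (a sketch assuming free-space wavelength; an antenna immersed in tear fluid only fares worse):

```python
C = 3.0e8  # speed of light in free space, m/s

def quarter_wave_length_mm(freq_hz):
    """Quarter of the free-space wavelength, in millimeters."""
    return C / freq_hz / 4 * 1000

# At 5 GHz a quarter-wave antenna is still ~15 mm, far bigger than a
# speck of glitter; lower, more practical frequencies are even worse.
print(quarter_wave_length_mm(5e9))    # ~15 mm
print(quarter_wave_length_mm(2.4e9))  # ~31 mm
```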
kevinalexander says
It could even nag you to regret that extra slice of cake in time to produce a new twist on bulimia.
Pierce R. Butler says
kevinalexander @ # 3 – Most likely, it will send specific information to Google Center, which will use both smartphone and wired lenses to zap you with ads for glucose.
kevinalexander says
George @4
Good points!
You could still get around those problems with a smartphone. Eyeball selfies.
Naked Bunny with a Whip says
I assume by “gushing gadgeteers” you mean the reporters, since there’s no way from this story for you to know what the engineers are doing beyond making lights flash, which is all you’d need for the glucose warning.
fulcrumx says
I wonder why they are even messing with the optics for any of that stuff in the first place. Why don’t they fix up a cap, bandana or ear fob that induces signals directly in the brain? Then, they can just make people think they see or hear or feel whatever marketeers want them to experience and be done with all the other doodads. You’ll never need to type a password again because they’ll already know everything you know.
Naked Bunny with a Whip says
I’m amused that here, of all places, I’m seeing people pooh-pooh basic research like this just because they don’t see an immediate way to turn it into a viable product.
PZ Myers says
I have no problem with the research. My objection is to the reporter who wants to rewrite the laws of physics so that it will do what he imagines it should do.
ChasCPeterson says
This isn’t “basic research”–it’s product-development research from the get-go.
dianne says
Assuming that the thing works in terms of providing an accurate measurement of glucose, why have it signal the wearer? Why not have it directly signal an insulin pump that can then increase the insulin administration? You would, of course, need continuous or at least frequent feedback to reduce the insulin again when the blood sugar went down. That would be much more physiologic and easier on the wearer.
purkinje4 says
Miniaturizing the computational power to accomplish this is in fact news!
consciousness razor says
What if Google scaled down their customers too, so the contacts would look like a big screen to them? Are they working on shrinking rays, by any chance?
Personally, I don’t understand why they don’t sell more Googley-things by saying they’re working on brain implants. If you want to send lots of information into people’s brains, it seems like that would be the way to do it. Sorta like Barney from the Simpsons … but with brraaaains.
Becca Stareyes says
I’d assume the contacts also have an eye-based glucose test. The basic idea of ‘test fluid for glucose, too much (or too little?) makes a light go off’ would be workable — as PZ noted, we don’t need to focus on a warning light (heck, I imagine the distraction would be a feature, not a bug).
consciousness razor says
I remember hearing something about artificial hearts or some such, which already do that… or at least it was being developed at one point. Basically, the doctor could just track what’s happening and make adjustments without having to cut them open again — because cutting people open generally isn’t a good idea. Anyway, the patients wouldn’t have to know anything about it really, unless they are serious tech geeks who have a hankering for another gadget to play with I guess.
marcoli says
I see the problem, but I have a niggling, vague idea that a sharp image can be made. Doesn’t Google Glass form an image about half an inch from your eye, but use reflective optics to make a virtual image seem about a foot away so you can focus on it? If this could be miniaturized about 10X further, I wonder if you could project a sharp virtual image from a contact lens.
Anyway, what I am really waiting for are flying cars. Where are the flying cars?
george gonzalez says
You don’t need a sharp image; a simple red-yellow-green LED flash every 10 minutes would be sufficient to tell you if you’re high, okay, or low. Hey, I should patent the concept of RYG for high-okay-low. And one flash for slightly, two for considerably, and three for OMG. Be right back. Riches await.
The real problems are: (1) Getting enough power in there to run the sensor and the LED. (2) Making it reliable enough so they don’t get sued out of existence when it misreads. (3) Surviving against the lobbyists and the legal wolfhounds of the entrenched near-trillion-dollar industry of selling expensive and proprietary test strips.
Brian Engler says
I didn’t care for this blog entry either. The WaPo had an article, though, that I think did a better job of covering the technology Google actually is working on for this contact lens: http://www.washingtonpost.com/business/technology/googles-smart-contact-lens-what-it-does-and-how-it-works/2014/01/17/96b938ec-7f80-11e3-93c1-0e888170b723_story.html?tid=pm_pop
fishydish says
Why does it matter that it’s on your lens? Certainly you can’t actively focus on something that is on your lens, but what matters is not how it looks at your lens, but how it activates your retina. What is on your lens does not need to be a tiny legible image, but rather whatever image appropriately stimulates your photoreceptors to *seem* like it is in focus. It will remain at a constant distance, so you should be able to create something that works with a ‘neutral’ focus of your eye, or whatever illusory distance is most comfortable (I say illusory distance because I’d expect your brain to *perceive* the image as consistent with your lens’ accommodation).
Mind you, it would be irritating to have something that is floating in front of your vision but is only in focus when your lens is in one configuration. It would be better if they could actually insert the image behind your lens so that it was permanently in focus regardless of what your lens was doing to focus the external world.
stevem says
re @4:
<techie geek alert> But it doesn’t need to supply 5 mW continuously for a day; it only needs bursts mere nanoseconds long, a few times a day. Also, are they claiming this “smart contact lens” would be completely independent? It seems much more feasible for it to be a peripheral for a “smart watch” or “smart phone” [Android™ anyone?]; as such, it could be like an RFID chip, where the power to transmit is derived from the signal it just received asking for an update. Thus no batteries required, just a basic capacitor.
And future development could lead to a miniature version of the “Heads Up Display”, where you don’t look at the screen itself but *through* it (at infinity) and the image on the screen is focused only when your eyes are focused at infinity and not on the screen itself (at a few feet away). And instead of projecting light onto your retina to form the image, it blocks the background, creating shadow dots (defocused; HUD-like) forming the virtual image.
cyberax says
Hey! Using a contact lens to create an image is not impossible in _principle_!
You just need to create the image and be able to control the _phase_ of the emitted light. We do this with radar: phased arrays can listen for (or emit) signals by manipulating phases.
george gonzalez says
@22 Sure, you can store up power in a tiny flat capacitor over a few minutes and spend it in, say, a tenth of a second. It probably takes that long to power up the analog sensor, stabilize, and integrate the reading. The problem is in getting approval to zap the eyeball with the 1 volt per centimeter or so you need to get a volt of DC power. Diabetics are already prone to cataracts, so it’s going to be a huge uphill battle to prove that irradiating the eyeball with that much voltage and power is okay. And the entrenched near-trillion-dollar industry could deploy a thousand lobbyists and ten thousand patent lawyers to fight this thing without putting even a small dent in their budget.
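To put rough numbers on that duty-cycle scheme (hypothetical figures throughout; the point is only the orders of magnitude):

```python
# Back-of-envelope check: trickle-charge a capacitor between readings,
# then spend the stored energy in a tenth-of-a-second burst.
burst_power_w = 5e-3      # assumed sensor + LED draw while awake
burst_time_s = 0.1        # time to power up, stabilize, and integrate
charge_time_s = 60.0      # assume one reading per minute

burst_energy_j = burst_power_w * burst_time_s        # ~0.5 mJ per reading
avg_charge_power_w = burst_energy_j / charge_time_s  # trickle required

print(burst_energy_j)      # ~0.0005 J
print(avg_charge_power_w)  # ~8.3 microwatts average, not milliwatts
```

Whether even microwatts can be delivered to the eye within exposure limits is, of course, exactly the regulatory question raised above.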
Desert Son, OM says
george gonzalez at #4:
Thank you for your comments about the technical limitations relative to the subject article. I wanted to challenge you on one point, however:
I would argue that some may not be very savvy, but it’s uncharitable at best to suggest the entirety of the journalistic workforce are failed scientists and engineers struggling to get “as close to the cocktail party” as they can.
Many journalists genuinely do want to report, in depth and as accurately as possible, on whatever field they hope to cover. That we are regularly inundated with soft sell marketing passed off as journalism does not mean the whole of people who go into journalism are not sincere in either their ambitions or their efforts to understand. It merely means the “entertainment” style of marketing distribution has a louder voice, bigger presence, more visible and pervasive (and pernicious) approach.
Further, journalists rarely publish independently. Even freelancers are submitting articles to an editor somewhere with “real estate” limits and still more editors up the chain worried about sales, audience demographics, advertising revenue, and in many cases the political party affiliation of the board or owners.
In some ways, modern blogging’s ability to devote more depth, analysis, and exposure to topics has given traditional news reporting a run for its money, and rightly so. But it’s still an unfortunate over-generalization to suggest that all science and technology journalists are just bitter that they never joined research academe. I don’t get the impression, for example, that Chris Clarke (who used to write at this network) spends his time wandering the Sonoran Desert in a storm, beating his breast and rending his garments because he’s not director of research somewhere, and I certainly don’t get the impression that he’s anything less than savvy.
Still learning,
Robert
markmckee says
Sorry PZ but you are wrong. Geordi LaForge had such a device in later Star Trek films.
And why would such a thing be impossible? Of course the human eye cannot focus on something so close but who ever said it had to? This device could simply send images to the retina that the retina would interpret as being 20 or 30 inches in front of the eye.
Just because we cannot fathom such tiny streams of image data does not mean that they are beyond the scope of engineering in decades ahead.
Anisopteran says
Isn’t there a more fundamental problem? How accurately, and how rapidly, does the concentration of glucose in tears track the concentration in the blood?
Enkidum says
I’m totally mystified by PZ’s objection here. As a couple of people have pointed out upthread, you don’t need to focus on the lens. It needs to mimic the light array that would be arriving from something at the appropriate distance, probably about a foot in front of you.
And yes, that’s a difficult technical problem for all sorts of reasons. But I don’t see why it’s inherently unsolvable or contradicts the laws of physics or anything.
footface says
Anisopteran @27 is right: I have a feeling tears are a poor substitute for blood. (When people would test blood sugar in urine (do people still do that?), they were basically getting news from the past: here’s what your blood sugar was a few hours ago. Not very helpful if you’re interested in treating or preventing insulin reactions.)
george gonzalez says
Okay, there probably are a few good science reporters. But I’m sticking with the opinion that most of them, and all of them that are working for a company or university PR department, are either intentionally or otherwise putting out very misleading rah-rah crapola. Even those putting out press releases from MIT. If they even bother, the caveats are buried down in the last paragraph, suggesting that maybe this huge breakthrough in string theory, nanotechnology, solar power, solar-powered cars, 3D printing, 3D ICs, electric cars, CO2 sequestration, or oil from algae shot into a 19-dimensional space just might not ever have a smidgen of a chance of making it out of the science lab after all.
ChasCPeterson says
no need to assume, as it’s stated explicitly. (These have been in development for over a decade, btw.)
More than 90% of diabetics are Type 2, and don’t use/respond to insulin.
Key question. The most recent information I can find quickly suggests that it’s not very good:
2005: Pearson correlation between lacrimal fluid (µM) and blood glucose (mM) concentrations and the proportional change from baseline revealed no significant associations.
2007: We observed significant correlations between fasting blood and tear glucose concentrations (R = 0.50, P = 0.01).
The latter study applied “liquid chromatography (LC) with electrospray ionization mass spectrometry (ESI-MS) to determine glucose in 1-μL tear fluid samples”, which is clearly a long, long way from a simple contact-lens-based surface test, and still got a correlation of only 0.5 (i.e. only 25% of the variance in tear glucose was accounted for by the measured variance in blood glucose).
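In case the arithmetic isn’t obvious, the 25% figure is just the squared correlation (the coefficient of determination):

```python
# Pearson R = 0.5 between tear and blood glucose means only R^2 = 0.25
# of the variance in one is accounted for by the other.
r = 0.50
variance_explained = r ** 2
print(variance_explained)  # 0.25
```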
I don’t have access to this 2007 review.
Caine, Fleur du mal says
#26:
Not this again. Star Trek was not real.
stevem says
re markmckee @26:
<trekkie ALERT> But Geordi was born BLIND (his eyes didn’t work at all; just his eyes, not the optic region of his brain) and the visor was the device that hooked directly into his brain (through his temples). It just sent signals directly to his brain, no focusing of photons onto his retinas through the corneas and lenses etc.
I think G**gle’s device is not even attempting to match Geordi’s visor (too challenging!). ;-p
aziraphale says
I’m a diabetic and also quite squeamish about putting things in my eyes. I think I would prefer a small implant, just under the skin, which would change colour according to blood glucose levels. With fewer constraints on size and power than a contact lens, it could also measure other parameters and send them to a phone or smart watch. It’s seldom that glucose levels require an instant response anyway.
ChasCPeterson says
aziraphale @#35:
Such “smart tattoos” are also in the works.
PZ Myers says
Some of you are doing the same thing the tech reporter was doing: trivializing a huge problem to leap to an application that Google did not talk about doing. Yeah, you could have a teeny tiny phased array of emitters embedded in the lens, generating a Fourier-transformed version of the image you want projected on the retina, with all bits in the proper phase to generate an interference pattern on the retina. Sure. Yeah. Get right on that. The circuitry to do that isn’t going to fit on a lens, at least not yet.
This is not a tiny version of google glass. It is not being promoted by Google as such a thing. The technical obstacles to doing that are huge — it’s like seeing a flashlight and proposing that it’s a death ray in development.
What’s being built is a glucose sensor coupled to a tiny light emitter as a signal. That’s it. If it works (Chas has pointed out some problems with the biology), great. But that does not justify weird and extravagant extrapolations.
dysomniak, darwinian socialist says
But PZ, you’re forgetting that the singularity is going to happen the day after tomorrow and we’ll all have quantum supercomputing nanobots for blood!
Nerull says
The best part of this thread is how many people are willing to lampoon reporters for writing/opining about things they don’t understand, and then turn around and give their (correct, of course, because they wrote it) opinion about something they don’t understand.
jnorris says
That would be an interesting hack.
Caine, Fleur du mal says
Dysomniak:
Heh. I’m currently reading The Whole Death Catalog, and Schechter happily points out, right at the start, that there will be incredible medical advances to stave off aging and death, and they ought to be somewhat available (or possible) around 2160. A fair amount of the planet should be uninhabitable around then, too. I’m sure it will be fun.
chigau (違う) says
Nerull #39
Could you be more specific?
michaelbusch says
@george gonzalez @several points:
You seem to misunderstand the proposed device. The glucose monitor is being pitched as a purely passive system, to be read like any standard RFID tag (change in glucose levels changing the value of the tag). The patient carries a tag reader on their person, which does all necessary processing and presents the value to them (I expect problems guaranteeing that the patient’s glucose level can’t be read by anyone with a tag reader). That is the first version. I personally had assumed these would be designed as disposable contacts, each being used for only a few days.
Their second idea is the miniaturized LED bit, but that was strictly being “investigated”: http://googleblog.blogspot.com/2014/01/introducing-our-smart-contact-lens.html . For that, your criticisms are relevant. As others have said, it doesn’t take much energy if it isn’t being used all the time, but it’s not clear to me how useful that would be as compared to exploiting the existing “my phone is ringing” reflex.
Re. PZ’s original point:
Near-field optics is a thing. It would be possible, in theory, to project an image from a contact lens onto somebody’s retina. But that would not be easy, nor is it what these contacts would do.
Enkidum says
PZ @37
“But that does not justify weird and extravagant extrapolations.”
Well, if the only thing you’re upset about is that the reporter was exaggerating the problem when he said “scale that up” or whatever, then sure, that’s not great reporting. But that’s not all you’ve been saying, you said that this requires “rewriting the laws of physics”, and you presented the problem as being due to us not being able to focus on stuff at the surface of the lens. Which is just the wrong criticism to make.
In terms of the optics, it’s actually not a very difficult problem. It really isn’t. It’s impossible to figure out, given a two-dimensional array of light, what three-dimensional arrangement of objects in the real world created that. But it’s pretty damn easy to figure out, given a three-dimensional arrangement of objects, what their projection onto a two-dimensional surface at a particular location in space ought to be. Of course I’m overstating things a little here and underestimating some issues, but the problem you have explicitly said is insoluble is not only solvable in principle, it’s solvable in practice. Today. Hell, thirty years ago.
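The forward problem described here is just perspective projection; a pinhole-camera sketch with hypothetical numbers:

```python
# Given a 3-D point and an image plane at focal distance f, similar
# triangles give the 2-D position that would "look like" that point.
# This is the easy direction: 3-D scene -> 2-D light pattern.

def project(x, y, z, focal=1.0):
    """Pinhole perspective projection of a 3-D point (z > 0)."""
    return (focal * x / z, focal * y / z)

# A point 2 units away and 1 unit off to the side lands half a unit
# off-axis on the image plane.
print(project(1.0, 0.0, 2.0))  # (0.5, 0.0)
```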
In terms of actually getting everything to fit on a contact lens, yeah, that’s obviously not going to be something we’re doing today. Or five years from now. But you present it as this crazy over-reach to suggest that this technology is a step on that road. Which is also just wrong. It is patently obvious that Google and others are specifically interested, in the long term, in seeing if it is feasible to develop Google Glass Contacts. No, this particular contact lens is not a full HUD, and no, they aren’t claiming it is anything like that. But if you don’t think that there are people at Google, likely even the ones working on this specific project, who are trying to think about how to adapt this technology to allow, say, multiple pieces of information to be simultaneously presented, or whatever… then you’re just wrong. OF COURSE they are doing just that.
Maybe it won’t work, maybe there are real hard limits of the engineering here, but this is an entirely appropriate thing to think about in the context of this technology. The reporter is being lazy and overly simplistic, and he could probably ask a lot more useful questions. But you’ve misrepresented the issues in the other direction.
theophontes (恶六六六缓步动物) says
@ PZ
The device does not need to sit over the focusing part of the lens at all; that would interfere with regular vision. Rather, I would imagine a donut-shaped (or rather, old-Chinese-coin-shaped) device that can be powered by induction. Hell, even your cellphone can do this now. The device would use lasers to stimulate the rods and cones in your eyes directly. They could also be used for communication with other devices (watch, cell, toaster, dashboard, …), both to transfer information by relay and to aid with processing power.
The principle is easy; the difficult part is fitting all those tiny little gnomes into the donut.
Caine, Fleur du mal says
Theophontes:
You can’t go trusting gnomes. They’d eat the donut.
garydargan says
The optical problem could be overcome by wirelessly linking to Google Glass.
chigau (違う) says
If we all just uploaded ourselves, all of our physical problems would vanish!
oh … wait …
theophontes (恶六六六缓步动物) says
@ Caine
No worries, I have already fixed the problem (See: picture).
Resolved. We can go off for tea and donuts now.
bertrandle roy says
You could also imagine tiny lasers targeting the retina directly.
But come on, nobody jumped on the opportunity to call those Googly Eyes?
theophontes (恶六六六缓步动物) says
And of course no one has come up with the obvious: Electronic piercings through the eyelids. To see the digital images, you merely close your eyes. “Wink for an update”, etc. This takes care of all the problems with dilating pupils and the like… and powering the devices.
Holms says
Alternatively, they could just make a small blinking light signal the wearer, which is a much simpler solution, as simple on/off lights are much less problematic than antennae.
It’s also precisely the solution the engineers specifically mentioned, so, I guess that’s the one they’re going with.
To reiterate the OP: the engineers never mentioned turning these things into a miniature Google Glass, the engineers never mentioned linking it to Google Glass, they only mentioned using a light signal. The whole point of this post was that the reporter breezily hand-waved a bunch of physics away.
sonofrojblake says
Elderly but distinguished scientists, a suggestion: look up Clarke’s first law.
Bernard Bumner says
Powering miniaturized electronics, particularly in the eye (buffered, aqueous environment)? I can imagine that might be possible by generating current in situ using various technologies currently in development – I know of various projects to produce modified biocompatible materials which can generate power (e.g. coupling carbon electrodes to enzyme cascades).
Of course, if you could do that, then you could probably simply directly monitor glucose levels using a coupled glucose-oxidase system, like that currently employed for blood-glucose testing. (And you could probably more easily and effectively do so in the body using a chip-based implant.)
This is a fun story, and an interesting technology platform (probably in need of a useful application). However, I’m fairly sure there are better solutions to manage Type I Diabetes.
jim1138 says
Use phased-array optics. A laser would send light through an array of waveguides (i.e., fiber optics), and the light would be emitted from a number of nodes (maybe thousands; the more, the tighter the spot) drawing light from the waveguide. A controllable phase shifter would delay the phase of the light by up to one wavelength at each of these nodes. If the phases of all the emitters align at one spot on the retina, a relatively large amount of light will be directed to that spot, and an image could be formed by scanning a pattern on the retina. Of course, you need to correct for the changing focus of the lens and its aberrations, as well as drift and distortion of the emitter contact lens. Also, since the eye scans as the person reads, and uses tricks like ocular microtremors to look for edges, the image would have to be updated in coordination with the eye’s movement. All of this would need to be handled by the 1-microwatt processor…
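The steering principle in that first step can be sketched in a few lines (a toy 1-D array with made-up numbers, not a design for a contact-lens emitter):

```python
import cmath
import math

wavelength = 1.0                 # work in units of the wavelength
n_emitters = 16
spacing = 0.5 * wavelength       # half-wave spacing avoids grating lobes
target = math.radians(20.0)      # direction we want the beam to point
k = 2 * math.pi / wavelength     # wavenumber

def intensity(theta):
    """Far-field intensity of unit emitters whose applied phases cancel
    the geometric path difference in the target direction."""
    total = sum(
        cmath.exp(1j * k * i * spacing * (math.sin(theta) - math.sin(target)))
        for i in range(n_emitters)
    )
    return abs(total) ** 2

# Scan the far field in 0.1-degree steps: the peak lands on the target.
angles = [math.radians(-90 + 0.1 * j) for j in range(1801)]
best = max(angles, key=intensity)
print(round(math.degrees(best), 1))  # 20.0
```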
augustpamplona says
Here’s a report about a different technology also choosing to push the “we could make contact lenses out of this” angle (except in this one they are quoting a researcher, so I’m not sure you can blame the press-release person):
http://www.ns.umich.edu/new/releases/22042-thermal-vision-graphene-light-detector-first-to-span-infrared-spectrum