DVD by mail isn't dead yet

Discovery of the day: Netflix has made its original series "House of Cards" available on DVD. For a while, Netflix was in the paradoxical situation of offering almost all of the world's video content on disc except for its self-produced shows. NFLX has not been shy about its strategy to shift attention away from DVDs toward streaming, and I was wondering if this emphasis would include omission of its original series from the legacy platform. It looks like streaming will be the preferred service for premieres, with DVD release approximately 5 months after initial availability. This is similar to the standard DVD "window" for other shows and movies, so Netflix is acting like the other content owners. As a DVD subscriber, I'm glad to see that Netflix is not using its content as an exclusive wedge to further disadvantage the older service.

The logical question, though, is whether I will be able to wait 5 months to see season 4 of Arrested Development or will break down and buy a month of streaming just for one show.


Convenience (not savings) is the trojan horse for "smart" appliances

There is an alternate reality somewhere in which customers pay the constantly-changing spot price for electricity, smart appliances adjust load to maximize savings under varying realtime conditions, and legions of economists dance the happiest dance that economists are permitted. In our reality, regulators and customers are profoundly ambivalent about even simple pricing experiments like time of use metering, and the use case for consumer load shifting is mostly limited to electric vehicle charging. While stubbornly flat electricity rate plans will disadvantage the demand for smart appliances for the foreseeable future, at least one barrier fell recently: standardization.

Marvell's SEP 2.0-compliant thermostat

With the release of Smart Energy Profile 2.0, there is now a common language for manufacturers to provide functionality for startup, shutdown, monitoring, standby, and other functions. While this is valuable for utility planners, it's harder to make the case that consumers should care. The more affluent consumers who will be able to afford the first generation of smart appliances aren't going to be swayed by a $10 bill credit. They will, however, enjoy the ability to power up their vacation home from their smartphone as they drive so that it is warm and lit up upon arrival.

An open standard will enable a rich and interoperable ecosystem of third party applications to talk to any device from any manufacturer. Bluetooth's "remote control profile" (AVRCP) enables users to pair a Bose speaker with an Apple iPod. Similarly, the stalled and heavily-siloed home automation controller market could find new opportunities in coordinating a Nest thermostat and Philips Hue lights from one place. Getting manufacturers to implement the standard may take some time, but convenience and lifestyle applications will provide more near-term possibilities than the utility's ability to change your air conditioner setpoints on a hot day.

Is this a way out of the chicken-and-egg logjam of device availability and rate plan introduction? I don't see any others on the horizon.


Electrification, and new vehicle ownership models

If battery technology continues its slow development, will a different ownership model be the accelerator that electric vehicles need to break through? Our approach to personal transport is dominated by two models, with BMW pointing the way to an exciting hybrid approach:

Own 1900-present
  • Lowest total cost of ownership for heavy users
  • Inflexible - one vehicle for all needs
  • Infrequent large inconvenience - user handles maintenance, inspections
Sharing 2000-present
  • Lowest total cost of ownership for infrequent users
  • Flexible - choose vehicle to suit needs
  • Frequent small inconvenience - car may not be available, walk to pickup location
Hybrid ???
  • Ownership advantages for most frequent use case (commuting)
  • Sharing advantages for infrequent use cases (road trip, furniture hauling)
BMW is offering an internal combustion vehicle to electric i3 owners for those occasional trips that will exceed its 100-mile range. It fits somewhere between the traditional ownership model and the pure "mobility as a service" model of car sharing, combining the best aspects of both.

EV advocates have been claiming since the 90s that a 120-mile range is enough to handle the typical American commute and errand schedule between recharges. This analysis neglects the fact that we don't buy our cars to handle the 90% most frequent use cases. If that were true, we'd see entire fleets of single-seat cars with no trunk space. At our most rational, we buy our cars to handle 95-99% of the functionality we need so that even single people can drive friends around, take the occasional road trip, and bring home an umlauted cabinet from IKEA. Our desire for functionality can even be aspirational, with few people ever realistically intending to drive their SUV off-road or push their Mustang's cornering and acceleration limits. It's not enough to just handle the commute, and the technical responses (better batteries, on-board charging, battery swapping) have yet to demonstrate practicality. BMW has figured out a clever service-based model that addresses the other 5-10% of the user's mobility needs.
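To make the "last few percent" concrete, here's a quick back-of-the-envelope sketch in Python. The trip-distance distribution is entirely hypothetical (a lognormal that puts the median day around 20 miles), but it shows how covering the vast majority of days still strands you a dozen-plus times a year:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical daily driving distances (miles), lognormal-shaped:
# most days are short commutes, with a thin tail of long road trips.
daily_miles = rng.lognormal(mean=3.0, sigma=0.9, size=10_000)

ev_range = 100
days_covered = (daily_miles <= ev_range).mean()  # roughly 0.96 with this seed
```

About 96% of simulated days fit within the battery, which still leaves on the order of a dozen days per year when the pure EV isn't enough. That residual is exactly the gap BMW's loaner program fills.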

You don't want a drill, you want a hole. You don't want a car, you want mobility.

This hybrid model is an effective one that could carry us for the next 10-15 years. With the advent of driverless cars, I would expect that the pure service-based model will become more popular. Not piloting one's own car reduces personal investment in that particular metal box. If a shared car can drive itself from the remote garaging location, it removes one of the disadvantages of a sharing model. Driverless cars may make personal car ownership seem pointless. Given the surprising development speed of robot cars and the frustrating state of battery technology, does anyone want to take any bets on which one hits 10% penetration first?


My Camera's Last Remaining Advantage is a $1.99 Wrist Strap

Two years ago, I predicted that smartphones would replace dedicated cameras for the majority of consumers. I also had the smug notion that caring about aperture and exposure values would forever leave me with a "proper" camera. All it took was a trip to a damp Northern European island to show how wrong I was.

Lighting conditions in Scotland are challenging at best. You are usually dealing with a cloudy sky that gives you the choice of overexposed light grey overhead or an underexposed shadow of your subject. High Dynamic Range photography takes multiple exposures and composites them together with a bilateral filter to make the local gradients appear natural, enabling all the detail at each exposure to stand out. Though I had a choice of cameras for most subjects, time after time I kept reaching for my phone's HDR camera over my high-end point-and-shoot.

HDR, 5MP phonecam: sky has blue, foreground a bit neon but properly exposed.
Single-exposure, 10MP PowerShot: sky overexposed, foreground good.
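For the curious, the core idea behind HDR compositing is simple per-pixel weighting. This is a minimal numpy sketch of exposure fusion, not the full bilateral-filter pipeline an actual phone uses, and the "images" here are tiny hypothetical arrays:

```python
import numpy as np

def fuse_exposures(exposures, sigma=0.2):
    """Blend aligned exposures, favoring well-exposed pixels.

    exposures: list of float arrays in [0, 1], all the same shape.
    Each pixel is weighted by how close it is to mid-gray (0.5),
    so blown-out skies and crushed shadows contribute little.
    """
    stack = np.stack(exposures)                    # (N, H, W)
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))
    weights /= weights.sum(axis=0, keepdims=True)  # normalize per pixel
    return (weights * stack).sum(axis=0)

# Simulated two-shot bracket of the same scene:
dark = np.array([[0.05, 0.45]])    # shadow detail lost
bright = np.array([[0.40, 0.95]])  # highlight detail lost
fused = fuse_exposures([dark, bright])
```

Each output pixel leans toward whichever exposure captured it best, which is why the fused Scottish sky keeps its blue while the foreground stays visible.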

The trend will likely continue. The next versions of my camera (the Canon S95-110) have an in-camera HDR feature, but the camera's weak processor necessitates less-advanced image matching that creates ghosting and other artifacts. The matching and quality of the on-board image processing from my year-old smartphone is strong enough to handle moving objects in the foreground and whatever else I throw at it. The processing gap will only grow over time as camera manufacturers have a hard time justifying 8-core processors and other beasts. The science of optics has been stable for the last hundred years or so, but we're just beginning to experiment with superresolution, adaptive lighting control, and light field rendering. The action is in processing, not optics.

At this point, the most notable advantage that my point-and-shoot has over my phone is a wrist strap; I feel more comfortable dangling my well-secured camera over a canyon edge than my smooth, featureless slab of a phone. Build a retractable iPod Touch-style "loop" into my next phone and I may not bother bringing a dedicated camera along on my next vacation.


Android Beautification Project

It's a staggering triumph of product design when you can get 80% of the functionality correct for most people right out of the box. Only a small number of nitpickers and control freaks are ever going to change a product's settings from the default. The iOS persistent launcher, for example, shows the four most frequently used apps and those apps have a consistent, balanced visual feel. A spot check of my tech-savvy office shows that almost nobody has changed the default.

Android's icon design standards have been thoughtfully criticized elsewhere. Since the OS gives developers the freedom to add an alpha channel, icons can theoretically be almost any size or shape. Even Google's own in-house designs vary from 72 to 94 pixels tall. Android's style guide calls for a "distinct silhouette" and a "slight perspective" that can result in a busy look to my eye: (This was my persistent launcher bar before the Beautification Project.)

The glory of open systems, though, is that if you don't like something you usually have the ability to change it. The "holo" theme for in-app menu bar icons is a marvel of minimalist simplicity, calling for "pictographic, flat... smooth curves or sharp shapes." Nova Launcher (root not required) enables complete replacement of app and folder icons, so I swapped out the launcher icons for their closest action bar equivalents:

Aaaaah much better. This set has only two heights, one radius, and one color. We've gone from the colorful, chaotic riot of a third grader's birthday party to the elegant simplicity of a pinstripe suit. This level of under-the-hood tweakability may only matter to 2% of users, so it's not really a selling point for the mass market. But as one of those 2%, it matters to me.

In all fairness, icon customization is also possible (if involved) on iOS without jailbreaking.


Zombie Technologies Resist Disruption

Sometimes, the situation is not as dynamic as tech strategists would like to believe it is. My entry on inflection points and truly disruptive change struck the customary note of paranoia, cautioning those of us who project a growth rate and assume it will always be thus. Clipper ships, buggy whips, and telegrams are easy examples of technologies eclipsed by change. But we are also surrounded by stubbornly durable products that continue to hang around long past the point that any tech strategist would expect.

Why, for example, do FAX machines still exist? We can send PDFs around by email, and it's easy to sign and return documents completely digitally. Yet, my recent home refinance expected me to conduct the entire transaction by FAX. (I got them to accept an encrypted ZIP file full of PDFs instead.) A product manager in 1995 with a glimpse of today's mobile interconnected world would surely have predicted the death of the FAX machine by now, yet it's still a multibillion dollar (if declining) industry. The production equipment is fully-capitalized, R&D budgets are low, and demand still inexplicably exists. A generation of workers is comfortable with the equipment, and despite the hassles it is "good enough." Office equipment companies will ride this curve down the backside of the product life cycle as long as those thin commodity profit margins will sustain the business.

What other forms have persisted surprisingly beyond their sell-by date? Bicycle couriers? In-person equity trading floors? COBOL? The imperial system of measurement?


If current trends continue....

Will mathematical extrapolation destroy the world, harm your children, and give you unsightly skin blemishes? Maybe. One of my favorite bloggers posted an insightful warning about the dangers of extrapolation. He notes that any number of advancements will, at a macro-level, follow a predictable exponential change that looks like a straight line on a log plot. These relationships can prove surprisingly stable over a period of decades or even centuries; we've been stubbornly doubling transistor counts every 24 months since 1970, for example.

The danger lies in the inflection points where the rules change and the nice straight lines bend or even reverse. Check out Dr. Murphy's plot of Atlantic crossing times, which demonstrates both errors. Extrapolations based on wind power failed with the introduction of steam, and extrapolations based on steam power broke with the introduction of airplanes. Then extrapolations based on airplanes failed when the Concorde was retired and the laws of physics interfered. (Also, it looks like he's using MATLAB for his graphics!)

In the world of strategy consulting, the CAGR (compound annual growth rate) is our bread and butter. Read any analyst report on an industry, and predictions for the next 5 years will pretty much just be a growth rate inferred from the last 2-3 years. Of course we have more sophisticated tools in our bag. Sometimes we'll plot log production cost versus log units of production. Other times we'll look at technology adoption with a logistic function ("s-curve") or even a Bass diffusion model. Hedge funds are constantly plumbing obscure branches of physics or math for models that will give them an edge in modeling predictably irrational market signals. But in the end, humans just expect the near future to be not terribly different than the recent past.
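The analyst's toolkit above fits in a few lines. Here's a sketch (with made-up numbers) of the CAGR extrapolation and the logistic curve that eventually breaks it:

```python
import math

def cagr(start, end, years):
    """Compound annual growth rate between two observations."""
    return (end / start) ** (1.0 / years) - 1.0

def extrapolate(value, rate, years):
    """Naive straight-line-on-a-log-plot projection."""
    return value * (1.0 + rate) ** years

def logistic(t, ceiling, k, t0):
    """The s-curve that eventually bends every straight line."""
    return ceiling / (1.0 + math.exp(-k * (t - t0)))

# Hypothetical numbers: a market that grew from 100 to 121 units in 2 years.
rate = cagr(100, 121, 2)               # exactly 10% per year
forecast = extrapolate(121, rate, 5)   # keeps compounding forever
```

The naive forecast compounds without limit; the logistic saturates at its ceiling. The whole question of "when will the model break?" is the question of which function you're really living on.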

True breakthroughs, game-changers, and disruptions happen much less often than most marketing materials would have you believe. Just because our nice linear models can be broken doesn't mean that we should just throw our hands up and declare the world to be unpredictable. But it does mean that we need to ask ourselves what will happen when (not if) our extrapolations fail. What will break the model? Will your collateralized debt obligation explode if housing prices flatline or drop?


5 Rings, N-Screens: Tech Adoption and the 4 Year Olympic Cycle

It's an American biennial ritual to complain about NBC's coverage of the Olympics and search out a bootleg CBC stream. The time to stop complaining is now, because the Olympic future is here and it rocks. This has long been my wishlist for Olympic broadcasts:

  • Coverage of obscure events
  • No inspirational athlete profiles or interviews
  • No pointless commentary
  • Minimal advertising
  • Available on a mobile device
  • Caching for inconveniently-scheduled events
It's painfully noncommercial. It's not what the average viewer wants. You'd think it would never happen.

But it did. NBC Live Extra hits my wishlist down to the last point. This weekend I watched time-delayed fencing and whitewater kayaking on my Nexus, uninterrupted by anything but the natural breaks between bouts or runs. The experience was glorious.

In the past I have been an Olympic curmudgeon, avoiding the saturation bombing of gymnastics and swimming. The technical infrastructure now exists to draw in an entire population of rejectors who could not previously be bothered with broad-audience scheduling decisions. The process is currently a bit techie-oriented, and most folks are still intimidated by connecting a streaming device to their televisions. Today's landscape is a bit like Netflix streaming in 2007 before Roku, XBox, and AppleTV made it easy to watch movies on the big screen. This is not yet the time for the brave new world of asynchronous viewer-directed scheduling of the Olympics.

That world is coming, though. TVs have a 7-year replacement cycle. The incremental cost to add "smart TV" capability to a new device is dropping to zero. Much like we recently crossed the featurephone/smartphone 50% mark, it will soon make no sense to buy a linear-only non-streaming TV or external box. The Olympics will eventually be an app on mass-penetration smart TVs that anyone can use and understand.

  • 2008 had a few limited online tools for the techies
  • This year is a strong proof of concept for early adopters and visionaries
  • 2016 will welcome the early majority of enthusiastic pragmatists by making it easy and fun to choose their own programming
  • By 2020, even the late adopters will be sequencing their favorite events (and the innovators will be live-meshing 360 degree wireframes compiled from thousands of crowdsourced cameraphones as freelance commentators compete to provide audio overlay tracks)

How many new viewers will this approach bring in? What will it do to advertising rates? How will it change the content of the mainstream network broadcast? I can't wait to find out. In the meantime, let's all celebrate the availability of wall-to-wall curling from Sochi 2014.


One Corner of the CE Space That Hasn't Converged Yet

My kayaking loadout now includes three distinct pieces of waterproof electronic gear. These provide me with communications, image capture, and navigation. The same functions (and more) are all provided by my smartphone in one handy package, but it doesn't perform nearly as well under salt water immersion. For now, I'll stuff a bunch of dedicated devices into my PFD. So many other bits of electronic ephemera have already vanished into our phones that using three specialized devices feels weird.

VHF radio + camera + GPS

Check out this guy, for example. Yes, he's an absurdist parody. Consider, however, that today our MONDO 2000 friend could fold his money, video cam, minidisc, scanner, display, microphone, video players, cell phone, voice changer, powerbook, pager, gps, and still camera into a low-end smartphone. And by the way, he wouldn't need the now-defunct wood-pulp magazines either.

One waterproof phone may replace all that single-purpose gear someday so I'll look less like a waterproof version of the 1995 cyberdude. Anyone want to make a case for a phone with a built-in stun gun?


How Far Can (and Should) "Convergence" Go?

As electrification hit the world, we started to see more and more powered home appliances replace their hand-cranked equivalents. Motors were expensive, and a few manufacturers tried to popularize the idea of the modular "home motor" which could be moved from vacuum to blender as needed. Motor prices quickly dropped, and today my immersion blender/food processor/smoothie mixer/hand blender all contain their own inefficiently-deployed integrated electric drive. The trend lives on today in the dizzying array of KitchenAid accessories that will happily repurpose your blender's motor into an ice cream maker or a bread kneader. But we've basically moved away from the shared motor vision.

It is with this context in mind that I look to the recently-announced Asus PadFone as well as more established category players like the Motorola Atrix lapdock and the Asus Transformer. As mobile phones become increasingly powerful, the major value-add of a desktop/tablet/notebook/tv is in its greater output (big screen) or input (mouse/keyboard) possibilities. It's a natural geek response to be horrified at the inefficiency of duplicating multiple processors and storage arrays when all you want is a different i/o set.

As one of those geeks, I find the combination-dock concept tremendously appealing. Start with a powerful mobile phone, which accompanies me everywhere except while swimming laps. When I'm on the couch or the subway, slide that same phone into a bigger screen to create a tablet. When I need to work, tack on a keyboard and trackpad. Heck, Ubuntu even wants to let you turn your phone into a full-on Linux desktop! Re-use the radio, data plan, processor, memory, and GPS instead of paying for the same core components again and again. Voila, efficiency and modularity!

Tidy as the idea is, I wonder if it will have staying power. The shared components are not the major cost drivers of our electronics; the IHS estimate for the iPad 2 shows that the processor, radios, cameras and memory account for only about 1/3 of its bill of materials. (It's mostly the screen and enclosure.) Price is not announced for the upcoming padfone accessories, but the Atrix lapdock costs about the same as the netbook that it subsumes. If consumers move to such modular, configurable devices it won't be to save money.

So what are these convertible accessory-packs good for? Will the future resemble the traveler's briefcase with a phone, laptop, e-reader, and camera? Or my work station with its laptop and KVM docking station? I'm not sure, but it will be fun to watch this play out.


Know Thyself, With Data (Will Analytics Save us All?)

While it's tempting to focus on technology and economics, privacy and cybersecurity are probably the major blocking issues in the way of mass smart grid adoption today. These are serious issues and must be addressed, but let's remember that the flip side of privacy is data. From properly-anonymized data, we can progress through analysis, insight, and action. EnergyHub just released a fascinating report of state-by-state winter heating thermostat setpoints. It's easy to explain freezing New Englanders with "flinty reserve" or "Yankee frugality", but the greater savings realizable with a lower setpoint are probably a stronger explanation. (I'm most interested in why neighbors Iowa and Nebraska have a 4 degree differential.)

Fun as this trivial example is, it points to a heretofore nonexistent link in energy management. Any campaign for energy efficiency is going to find it hard to establish metrics and efficacy if the only feedback mechanism is monthly bills. As we move toward a (privacy-respecting, aggregate) view of energy use patterns, we will have the ability to know what works and what doesn't. That's ultimately much more interesting than just knowing that Vermonters own a lot of flannel.


Instant Photo Uploading: Using the Cloud the Right Way

Dropbox has announced an experimental build of their Android client that represents exactly how I want to share photos off my phone. I store all my photos locally on my computer and only share a small subset through Flickr. Most phone-based easy sharing systems want to send your photos directly to the cloud. Dropbox's new feature will sync my photos with my Dropbox folder, so I can easily move them over to my photo repository. (Or, I suppose, set up a cron job to do it automatically.) Thanks, Dropbox, for honoring my use case.
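That cron job is easy to sketch. Here's a hypothetical Python version; the folder names are my guesses for a typical layout, not anything Dropbox prescribes:

```python
import shutil
from pathlib import Path

# Hypothetical paths: adjust to your own layout.
INBOX = Path.home() / "Dropbox" / "Camera Uploads"
REPO = Path.home() / "Photos" / "phone"

def sweep(inbox: Path, repo: Path) -> int:
    """Move newly synced photos into the local photo repository."""
    repo.mkdir(parents=True, exist_ok=True)
    moved = 0
    for photo in sorted(inbox.glob("*.jpg")):
        dest = repo / photo.name
        if not dest.exists():  # never clobber an existing file
            shutil.move(str(photo), str(dest))
            moved += 1
    return moved
```

Schedule it with something like `*/15 * * * * python sweep_photos.py` (a made-up script name) and new shots quietly migrate from the sync folder into the repository.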

(Don't have Dropbox yet? Get a sign-up bonus here.)


Large-Format Phone Comparison: Galaxy Nexus (2011) vs Handspring Visor Edge (2001)

When the Galaxy Nexus was first announced, tech wags speculated that its 118mm screen would just be too big and clumsy to handle. As a recent owner I can report that it is large but certainly not unwieldy. In a fit of house cleaning, I dug up my very first smartphone and thought it would be fun to post some pointless back-to-back size comparison shots.

The Samsung is just a bit narrower and shorter than the Handspring, but some of that width is accounted for by the Visor's elegant stylus. The big difference is in usable space: the Nexus is all screen, and a beautiful 720p HD screen with black blacker than blackest night at that. The Visor has a tiny monochrome screen with the rest of its face taken up by hard buttons and the Graffiti writing area. (Hey - don't dis Graffiti. One of the first things I did with my Nexus was to install a virtual Graffiti keyboard.)

Flip them on their side and the contrast becomes more stark. Handspring was founded on the idea of "springboard modules", little hardware accessories that gave you an mp3 player or a camera or any of the other million things that our phones just take for granted these days. The Edge was meant to be the slimmest, sexiest of the Visor line so its springboard modules required an ungainly "shoe" that more than doubled the thickness of the device. The one and only springboard module I ever purchased was the phone add-on, which further had its own battery pack. From the side, this assemblage was a real porker. Still, it was kind of nice to be able to ditch the phone bulk when at the office and still walk around with a svelte little metal PDA.

It was my dream that someday I could have a phone with the form factor of the unadorned Visor Edge. There you go: 10 years later my Nexus is almost exactly the same size as that device. Progress!


The 2.1285e+10 pixels of 2011 (so far)

How do you visualize an entire year of photography in a single graphic? The following is a geeky art project I indulged myself in. It measures every pixel I have taken so far this year, showing the prevalent hues and volume of photos throughout the year. Each bar represents a week of images - there would be 52 if the year were over. The length of the bar represents the number of pixels of photography done in that week. The colors in the bar show the hues captured during that week.

The quantity trends are pretty obvious. The first bump in January is indoor shots from my signature annual party. I took relatively few photos in the spring when I was locked in the thesis cave. Photo quantity grew in the summer as I embarked on weekly kayaking and hiking trips. Peak pixel was September as I took a week to bike through France, capturing hundreds of images along the way.

Hue trends are also present. Winter times have a lot of brown and white. Green appears more as summer approaches. I might be fooling myself but I think I can even see autumn foliage. You might notice the appearance of a red kayak and a red backpack if you look carefully. I had hoped that my brilliant yellow kayak would show up in the summer photos, but the boat's colors get spread out among too many different hues to be noticeable.

How did I make this? MATLAB of course!

  1. Sort a 512-color RGB colormap with RGB2NTSC
  2. Load an image with IMREAD
  3. Reduce the JPEG's truecolor space down to a tractable 512-color space with RGB2IND
  4. Bin all pixels into groups with IMHIST
  5. Lather, rinse, repeat. Sum all photo histograms, binned by date.
The trickiest part was figuring out how to sort the 3-dimensional RGB color space into a pleasing 1-d continuum. It turns out that I'm not the first to have this problem; the folks over at Visualmotive already investigated this and I agree with their preference for the YIQ colorspace.
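For anyone without a MATLAB license, the same pipeline fits in a few lines of numpy. This sketch quantizes to 3 bits per channel (8×8×8 = 512 colors, standing in for RGB2IND) and sorts the palette by the Y of the YIQ transform, a simplification of the full Visualmotive ordering:

```python
import numpy as np

def hue_histogram(img):
    """Bin an RGB image (H, W, 3, uint8) into 512 colors, sorted by YIQ luma.

    Mirrors the MATLAB pipeline: quantize to 3 bits per channel,
    count pixels per color, then order the palette along a 1-D
    axis using the Y (luminance) row of the NTSC/YIQ transform.
    """
    # Quantize each channel to 3 bits -> palette index in [0, 511]
    q = (img >> 5).astype(np.int64)
    idx = (q[..., 0] << 6) | (q[..., 1] << 3) | q[..., 2]
    counts = np.bincount(idx.ravel(), minlength=512)

    # The 512 representative colors, scaled to [0, 1]
    r, g, b = np.meshgrid(*[np.arange(8) / 7.0] * 3, indexing="ij")
    palette = np.stack([r.ravel(), g.ravel(), b.ravel()], axis=1)

    # YIQ luminance as the 1-D sort key (dark colors first)
    y = palette @ np.array([0.299, 0.587, 0.114])
    order = np.argsort(y)
    return counts[order], palette[order]
```

Summing these sorted histograms per week gives exactly the stacked color bars in the graphic.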

In the histogram above, you can see blue peaks for the sky, green for the trees, and orange for the hiker's shirt. Black is prevalent since shadows are dark and some objects (the dog, the shirt) are also black. The most heuristically "wrong" thing about the 512-color reduction is that purple/magenta hues show up more often than you'd expect. There's a bit of lilac in the sky, and the same color often also shows up in wood and stone.

This was a fun project. I like using computation to come up with new ways to understand the world.


No, You Can't Have A (Fully-) Solar Car

Do the Math is a quantitative blog that looks at current issues in a back-of-the-envelope fashion. The latest entry is a calculation of exactly what it would take to make a production solar car run. I once delivered much the same calculation to a bunch of undergrads in a policy course during my solar car building days. They were kind of bummed by the numbers.

As far as I can tell, the only real application for on-vehicle solar cells is powering a fan in the car to keep it cool during hot days. This reduces air conditioning load when returning to the vehicle after it has been parked for a while, ultimately saving fuel or battery charge. PV is just too expensive and low-power for anything else. If you want a real solar car, charge your EV from the roof array on your house instead of hauling around a bunch of fragile cells.
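The arithmetic behind that conclusion fits in a few lines. All figures below are round, hypothetical numbers in the spirit of a back-of-the-envelope calculation, not measurements of any particular vehicle:

```python
# Back-of-the-envelope solar car check, with round hypothetical numbers.
roof_area_m2 = 2.0       # usable horizontal area on a sedan roof
peak_sun_w_m2 = 1000.0   # full midday sun
cell_efficiency = 0.20   # good commercial PV

pv_watts = roof_area_m2 * peak_sun_w_m2 * cell_efficiency  # 400 W

highway_cruise_w = 20_000.0  # rough power draw to hold highway speed
fraction = pv_watts / highway_cruise_w  # 0.02
```

Even in perfect midday sun, the roof supplies about 2% of highway cruising power: enough to run that ventilation fan, and not much else.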

Thanks to Ned Gulley for the reference.

Image by the MIT Solar Electric Vehicle Team. I wish they had a good photo of my beloved Manta.


Smart Grid Assists Wind Integration: A Non-Scary Thesis Talk

Want to know how Hawai'i can run its grid more efficiently, harvest more wind, and be more reliable all while not actually using many demand resources? Want a low-jargon way to learn about what I have been doing for my thesis, with a promise to use zero equations and only one incidence of the word "stochastic"? Now is your chance, my friends. I will be the featured speaker at SDM's Monday 14 November web seminar. Register here for free and don't forget to throw a few difficult questions my way.


Most Innovation Is Invisible

Neal Stephenson's recent talk on "Innovation Starvation" strikes a nerve in every engineer: we don't build anything anymore. With the end of high-visibility mega-projects like the space shuttle, it's an understandable notion. Another way to look at it is that the innovation of our era is incremental and invisible.

The telcos have invested billions to create a worldwide high-speed mobile data network. The only manifestation of this gigantic project is the occasional poorly-hidden tower disguised as a tree. In exchange, we are never lost, can always meet our friends at an event with no planning, are always informed, record or reference any memory, and can travel in unfamiliar places like a local.

Our electricity system is undergoing a seismic shift away from coal and toward natural gas. For the last decade, 90% of the new generation capacity in ISO New England has been highly-efficient, relatively low-carbon combined cycle gas turbines. If each of these replaced a coal plant, the avoided carbon is equivalent to a few hundred wind turbines. Cape Wind is a big, visible project with a high feel-good factor. But the invisible innovations in natural gas exploration that have made this cleaner fuel relatively cheap have had much more impact.

Even the military (which used to spontaneously generate battleships and bombers like Aristotelian flies) is assembling its toys from loose networks of small parts. The drone that just executed Anwar al-Awlaki is a fragile model airplane connected to a bunch of satellites, a guy in a trailer in Nevada, and world-class intelligence gathering.

To a generation that grew up on glossy books showing us the Future in its flying-car glory, this is all unsatisfying stuff. Sure, practical supersonic transport would cut my flight duration to Europe by a few hours. But the ability to rent a bike in Boston and return it in Cambridge saves more time per year. There is plenty of innovation happening, Neal. You just need to look with different eyes.


Special Project: NREL

I've been waiting for a while to announce this one. Last fall I applied for an Innovative Research Analysis Award Program grant from the National Renewable Energy Lab. It took a while to get the paperwork squared away, but the award is now official. From the NREL website:

Power System Balancing with High Renewable Penetration: the Potential of Demand Response in Hawai'i

The State of Hawai'i has adopted an aggressive renewable portfolio standard of 40% renewable energy by 2030. Most system balance studies in Hawai'i have focused on grid assets such as spinning reserve or energy storage to provide electricity when generation from renewable resources changes unexpectedly. Demand Response (DR) is an alternate strategy in which the grid operator ensures system stability by managing select consumers' loads, such as changing air conditioner set points or turning off non-essential loads within the service area according to a pre-approved prioritization plan. Demand Response may provide a lower-cost solution to balancing intermittent supplies, enabling the State to achieve its goals for reduced energy dependence. This research will use time series data for demand, wind speed, and wind speed forecasts to identify the potential grid-value of Demand Response, as well as DR program design to meet the needs of both the electric utility and electricity customers. A unit commitment model will simulate the relative production of wind, thermal, and demand response resources, then predict the frequency, duration, and scope of curtailment events necessary to maintain a balanced grid. Lessons learned in Hawai'i can be applied in other regions.

Collaborators: Massachusetts Institute of Technology, National Renewable Energy Laboratory

Estimated completion is September, with publications and conferences to follow. This project is the reason why I delayed my graduation from Summer to Fall. It's exciting to see it come together.


Meta: Plus

I now have a g+ identity. Use my profile to add me to your circles and I'll return the favor. Postings there (and still on facebook) will mostly be mirrors of what you see here.


It's Your (Grid) Weather Forecast

I'm a bit of a grid geek. We are having a hot day here in New England and my facebook/google+ stream is drowning in people talking about 100+ degree temperatures. That's the obvious result of the weather. The less obvious result is that electricity demand is currently surging as everyone cranks the air conditioning to stay comfortable. Almost every generation asset in the region is probably running near maximum right now. Since taking Ignacio's grid regulation class, I have recreationally checked the marginal price map from ISO New England. Today is the worst I have ever seen it. LMPs are in the $200/MWh range right now. During last week's mild summer temperatures the region was about $30/MWh.

What does this mean behind the scenes? Suppliers of base load power are cleaning up right now. If you run a cheap coal plant, you get paid the same as the near-decommissioned fuel oil plant that they crash fired yesterday. If they didn't have long-term power supply hedges, utilities would be losing money like crazy. As a residential customer, NSTAR charges me 7.7 cents/kWh to buy electricity and they could be paying 20 cents right now. Our independent system operator is no doubt going crazy to make sure that all the reliability constraints are being met. EnerNOC is probably calling industrial consumers all over the region and asking them to curtail their electrical load. This is all heroic effort to make sure that we all stay comfortable and cool on an otherwise ugly day.
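The unit conversion that makes today painful for utilities is trivial, but worth seeing next to the numbers from the post:

```python
def per_mwh_to_cents_per_kwh(dollars_per_mwh):
    """1 MWh = 1000 kWh, so $/MWh divided by 10 gives cents/kWh."""
    return dollars_per_mwh / 10.0

wholesale = per_mwh_to_cents_per_kwh(200.0)  # today's spike: 20 cents/kWh
retail = 7.7                                 # my flat NSTAR rate, cents/kWh
```

At today's spot price, the utility's wholesale cost runs roughly 2.6 times what I pay at retail, which is exactly why the flat rate hides all of this heroic effort from me.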

What does this mean for me, the consumer? Very little. I pay the same rate regardless of heroic effort. Who cares about power balancing, marginal prices, or what the generators had to do in order to get me the electricity? Turn it up and let it rip!

This is an insane way to run a market. Would you act any differently if you knew that your electricity cost was going to be 4x higher today? Personally, I'd be swimming in a lake.