Who Says There's No Alternative?…


The status quo no longer cuts it…

Over the past several decades, our national dependence on imported oil has been a chronic problem, and it's now becoming a crisis. One recent issue has been the sharp increase in crude oil prices:

[Chart: crude oil prices]

Part of this price spike is being driven by speculation rather than actual market supply and demand, but the trend here is not our friend. Prices will continue to move upward as global demand increases – especially from Asia. An oil dependent future will not be a bright one for us. And price is just one factor in a complex set of energy policy issues that will continue to challenge us.

We need to change the course we’re on. And we need to do it soon…

When the (then) Soviet Union launched Sputnik I in 1957, it set off alarm bells in this country that we were falling behind technologically, and that our security might be at risk. NASA was formed shortly thereafter, and in 1961, President John F. Kennedy called for America to land a man on the moon and return him safely to Earth by the end of that decade. The technology to achieve that goal didn’t exist at that point, but the will to make it happen was there. It set in motion an impressive period of innovation, where many ideas and approaches that had been floating around were explored and evaluated. And the best of them were funded and developed. There were setbacks and failures on the way, but by July of 1969, a man was walking on the lunar surface.

It’s time for us to challenge ourselves again – this time for energy independence…

We need to feel the same sense of urgency that we felt with Sputnik in the late 1950s. There are alarms going off on many fronts, and we need to start paying attention. For multiple reasons, making a significant change here needs to become a national imperative.

One is national security. The United States has around 2.5% of the proven oil reserves in the world, and we consume about 25% of all the oil produced globally. The harsh reality is that about two-thirds of the oil we import comes from countries that are either politically unstable or openly unfriendly to us. The money we export for this oil allows many of these countries to work against our national interests globally, and potentially fund activities targeted at attacking us on the home front. Our lack of self-sufficiency in this area also puts us in the position of potentially compromising our national values in order to preserve the flow of oil we depend on. In a sense, we have become hostage to our own thirst for oil. This should make every one of us uncomfortable.

Our economic strength is also a consideration. It’s pretty much a given that fuel costs will continue to rise. Global demand for oil is exploding as the economies of Asia expand, and we have no other option now but to pay market rates for it. Unfortunately, rising costs for oil impact every aspect of our economy. Production and logistic costs will continue to go up, and they get passed along as higher prices for goods and services. On top of that, people will have less money to spend after dealing with rising heating and gasoline prices. Combined, these factors could seriously impact the vitality of the U.S. economy and the stability of our financial markets. And that economic and market uncertainty can ripple through to things like employment rates, health and retirement benefits, and the tax base we depend on. These are shifts that will be felt by everyone, but they will have a disproportionate impact on those without the financial means to make adjustments. Ultimately, it will be hard to maintain our position of global economic leadership – a critical driver of our current standard of living – without having a stable domestic economy to back it up.

Another reason for rethinking our approach to energy policy is our need to preserve the environment. The carbon released by the burning of fossil fuels is likely having a measurable impact on climate patterns globally. Any significant changes to our global climate could have profound implications. A shift away from oil as our primary basis for energy generation would be a long overdue investment in our planet's future.

Without a doubt, none of these are easy challenges to deal with…

To make a meaningful, sustainable difference, we have to move past the 'quick fix' mentality here. This means we have to stop focusing on solutions that simply subsidize the status quo, and instead craft policy that creates incentives for people – and the marketplace – to make correct long term decisions. It requires a radical investment in change.

There are two paths we need to go down…

The first is based on the formation of more effective policies and actions. Though we need to encourage voluntary conservation, it will not be sufficient to get the job done. We need to inject conservation into the formation of public policy. It needs to become part of the structural framework everyone operates under. We should establish efficiency standards that businesses and products must achieve, and create both the economic incentives and disincentives the marketplace needs to realize them. But public policy also needs to be sympathetic to the lives of the people it affects. We need to avoid sponsoring programs that may sound good in theory but few people actually use. They end up being 'feel good' measures that simply give the appearance of conservation while squandering financial resources without any measurable benefit to show for it.

To be effective, public policy ultimately needs to be practical…

The second path we need to follow is one of fundamental technical innovation. This needs to go beyond the incremental innovations typically driven by market forces. I envision this as the formation of a 'NASA' for energy technology (actually, more like the 1960s version of NASA). It should be a magnet for the best, brightest and most innovative in the energy field, giving them the resources and opportunity to do groundbreaking work they wouldn't have a chance to do anywhere else.

While this agency would need to have a clear mission and be accountable for how it spends resources, it shouldn’t be run like a business venture. Fundamental innovation doesn’t happen that way. It will need to be headed by someone respected by other scientists and technologists – not a bean counter or politician. It should shun bureaucracy, and avoid any corporate involvement or sponsorship. Within reason, whatever IP it produces should be licensed using an ‘open source’ style approach, streamlining adoption and encouraging the formation of a new breed of energy related industries. (Industries that will ultimately supplant our existing energy industries.)

Some of the innovations developed from this path might also require significant structural investments to make their adoption practical (e.g. – we needed a national network of gas stations to make the car a viable medium of travel). This would need to be done through a well considered combination of public policy, market incentives, and direct government involvement.

Nurturing these technical ecosystems will be important to their success…

If we walk around believing that everything is fine in this space, we are simply deluding ourselves. We are under the gun to change – both literally and figuratively. That said, there are plenty of special interests lined up behind the status quo, and they have plenty of money to spend to keep things just the way they are. They will spin the media and aggressively lobby Congress in an effort to preserve their 'business as usual' approach.

Change on this scale will not come easy, and will require the backing and commitment of people at the highest levels of both business and government. It will need to spring from a non-partisan recognition of the urgency of our current situation, and a passionate belief in the great things we can achieve as a nation when we have the focus and will to do it.

And most importantly, it will require all of us to stand up and demand nothing less…

Linux For The Masses?…


Sometimes less can be more…

And that's what the makers of 'gOS' are counting on. gOS is an Ubuntu 7.10 (Linux) variant that is packaged with a collection of Google's web based services (including Google Office), the Firefox browser, and a few other web centric applications. The desktop, reminiscent of Mac OS X, offers simplified access to all of these services:

[Screenshot: the gOS desktop]

This OS will be offered preinstalled on a system that will be sold by none other than Walmart. Priced at $199, it offers the average person all of the core services they will likely need on a computer except for one.

Internet access…

That said, I believe there are ways to provide for that as well. Now that we are seeing PCs dropping below the $200 price point, it might make sense for the market to start thinking of them the way wireless carriers think of phone handsets. There will always be a 'high end' market just like there is with handsets. But there is also probably a market for a service that bundles a system like this with internet connectivity for a basic 'cell phone like' plan and commitment. Keep it for a two year contract, then trade it in for a new one when you renew. It could also be subsidized by advertising, something Google and others are looking to bring to the cell phone market.

With a combination of packaging, ad/sponsorship subsidies, and Moore’s Law, it might even be possible over time to deliver access to the internet at a price point almost everyone can afford. It’s clear we need to begin to close the ‘digital divide’ that limits access and opportunity for many of the poor in this country.
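As a back-of-the-envelope sketch of how such a bundle might price out, here's a simple calculation. Every figure besides the $199 system price is a hypothetical assumption, not an actual market number:

```python
# Hypothetical bundle economics: amortize the hardware over a
# cell-phone-style contract, add connectivity, subtract ad subsidy.
# All figures besides the $199 retail price are illustrative guesses.

def monthly_bundle_price(hardware_cost, contract_months,
                         isp_cost_per_month, ad_subsidy_per_month):
    amortized_hardware = hardware_cost / contract_months
    return amortized_hardware + isp_cost_per_month - ad_subsidy_per_month

price = monthly_bundle_price(
    hardware_cost=199.00,      # the gOS system's retail price
    contract_months=24,        # a two-year commitment, like a handset
    isp_cost_per_month=15.00,  # assumed wholesale connectivity cost
    ad_subsidy_per_month=5.00, # assumed advertising offset
)
print(f"${price:.2f}/month")  # roughly $18.29/month
```

If the ad subsidy grows or Moore's Law keeps pushing the hardware cost down, that monthly figure only gets smaller.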

This might be a good way to start…

The Tectonic Forces Shaping The Web…


There's been a lot of recent posting about 'Web 2.0', 'Web 3.0', and beyond…

The biggest fallacy in all of this web versioning mania is that it treats the web as if it were some vast, singular gestalt. The reality is that there are multiple threads of evolution taking place concurrently in this space. Each of these threads is developing at its own independent pace. It's the assembly of elements from various points along these threads that ultimately emerge as "user experiences" – not an easily versioned Web 'singularity'.

I believe the evolution of the web overall is best served by discussing the various domains that help make it up. These domains are like the tectonic plates that make up our globe, appearing as a single whole, yet moving independently and sometimes in conflict with each other. I believe that there are five of these domains that are shaping what we see happening in the web space today. They are each at various levels of maturity, and each faces distinct challenges in their continued evolution:

1. The Interface Domain

Most people experience the internet through a browser. The browser model was designed to let people easily navigate through the early web – little more than a globally distributed hypertext deck. It was a basic environment that was light on media and heavily textual. HTML – the web page format interpreted by the browser – became the basic descriptor for site implementation. It has since been extended with CSS and now AJAX to become a more dynamic user application environment. Adobe's Flash – a web media technology – has also evolved into a significant component of the modern web experience and is used for almost all video on the web.

While today's web experience is vastly different from the initial 'Netscape 1.0' experience of the mid-1990s, the overall web access model hasn't really changed substantially. It is still driven by search, bookmarks, and links. It still has a Home Page and Back/Forward navigation. It's also something that you still need to 'visit' (using some browser type of tool). It isn't yet embedded in software and devices in a way that makes it both ubiquitous and invisible. I believe current web interface models are still in their early days, and think they still have a long way to evolve.

2. The Social Domain

Social networking sites like Facebook, MySpace, and LinkedIn – the best known expressions of the social web – are based on the concept of personal identity. They depend on people ‘advertising’ who they are, and providing sufficient personal details (in a structured format) so that other people can discover them. The goal for sites like these is to make – and then leverage – connections between specific individuals.

But there's much more to the social evolution of the web than just that.

Another side – made popular by del.icio.us – is social tagging. Social tagging allows people to classify web sites they visit using text tags, and leverage what I call reputational identity. People have identity, but it's only there to identify them as the creator of their tag set. People can look for tags created by other specific people, but they don't get to see any details about them.

Yet another aspect of the social domain is the assignment of relevance in search. This area uses putative identity to define relevance. It isn't dependent on having hard individual identity for the authors of specific web pages. Rather, it derives the authority of these authors through indirect means like the preponderance of links to their content from other 'highly relevant' sites. This network derived 'authority' becomes an author's identity in this domain. They have no direct role in defining it, yet it's probably the most significant factor in the social discovery of what they publish.
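As a rough illustration of how authority can be derived purely from link structure, here is a simplified PageRank-style iteration. The link graph and damping factor are invented for this sketch – real search engines tune this very differently:

```python
# Simplified PageRank-style iteration: 'authority' flows along links,
# so heavily linked-to pages accumulate the most of it.
# The link graph below is made up for illustration, not real web data.

def link_authority(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            if outgoing:
                share = damping * rank[page] / len(outgoing)
                for target in outgoing:
                    new_rank[target] += share
        rank = new_rank
    return rank

links = {
    "blog-a": ["popular-site"],
    "blog-b": ["popular-site"],
    "popular-site": ["blog-a", "blog-b"],
}
ranks = link_authority(links)
print(max(ranks, key=ranks.get))  # popular-site
```

The page everyone links to ends up with the highest score, even though its author did nothing to claim that authority directly – which is exactly the putative identity idea described above.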

There are also social technologies like Bittorrent that depend on technical identity – they use vast networks of individuals to create efficiencies around technical actions (like downloading a file), and only require sufficient identity to establish connections and manage resources. There is no personal identity involved in this.

I think the biggest issue that needs to be addressed for the social web to really move forward is the evolution of a model for verifiable personal identity. The vast potential this thread of development holds will remain shackled without a global framework for trust, reputation, and identity assertion and management. I believe the social domain has probably plateaued waiting for this to happen. Once it does, I expect to see this take off in many different directions.

3. The Connectivity Domain

Connectivity is all about the throughput and ubiquity of links to the internet. At a macro level, connectivity has been evolving at a pretty good clip over the last decade. The number of individual households connected to the net has grown steadily, and the bandwidth of those connections has continued to climb as well. That said, connectivity growth has been uneven globally, with certain parts of the world virtually invisible on the net (less than 4% of Africans are online!), while other regions (like parts of Asia) are aggressively expanding.

Unlike all of the other aspects of the evolving web, connectivity isn't virtual – it's physical. It involves hard costs and regional priorities. It runs up against land use rights. It needs to work around entrenched interests looking to derail projects they feel threaten them. It crosses townships, jurisdictions, and borders – and that means it involves politics. And that means it's messy and difficult to predict.

If you look at the battles that have taken place between individual cities looking to install municipal wifi, and the big telecom providers like Verizon and AT&T, it’s clear the battle lines are forming between the old and new guard here. The adoption of new technologies like WiMax scares the hell out of most cellular carriers because it can bring VoIP to the mobile market – and kill their current business models. (I’ll need to see how open Sprint will be with their WiMax implementation) You have China trying to filter out services like Skype to prop up their state run phone service monopolies. And the debate of network neutrality in this country is starting to shape up in a similar way. No one’s giving up without a fight.

Unlike other aspects of the evolving web, technology isn’t going to be the gating factor in the evolution of connectivity. It will be a political battle that, unfortunately, the old guard is better positioned to wage right now than the new guard. That said, Google is a wild card here, especially in the upcoming auction of spectrum. They have the resources to really shake up some of the entrenched interests, and the vision to move things forward. But any success they have would only impact the US, and it would do little to improve services to the most pervasive web platform globally – the mobile phone.

I believe connectivity will continue to mature in fits and starts. Openness and capacity will be distributed unevenly – not just globally but within this country as well. Progress in many areas will come reluctantly, and at the minimum level needed to keep the political interests – and hence, regulation – at bay.

4. The Content Domain

Content is exploding on the web. The evolution from passivity to participation is moving at a good clip and accelerating. This is being fueled by a combination of increasing bandwidth, easier-to-use free publishing tools, and a shift in the cultural zeitgeist that now finds value and satisfaction in this form of self-expression. Media of all kinds is being produced and distributed via the net, bypassing the traditional gatekeepers who are powerless to stop it. This has allowed a viable micro-publishing model to emerge, where content is created and packaged by small groups to serve the interests of increasingly small market demographics.

The content thread is probably the most evolved of all the threads that make up the net, but it still faces some significant challenges. Censorship and disinformation are very real threats (just look at what happened in Burma – content still travels through physical wires and boxes, and these can all be controlled.) The lack of global consensus on a 'Web Bill Of Rights' leaves us in a situation where individual countries are attempting to apply their local laws to the global web, raising real issues around freedom in a trans-national space. On the business end, effective models for commercializing content have yet to emerge, which, combined with piracy, is slowing the creation of 'professionally' generated content. Cultural balkanization is also a concern. Non-English content is continuing to grow in volume and importance, but a technological foundation that can effectively handle discovery and translation doesn't exist yet. And it may be a long, long way off.

Despite these challenges, I feel good about where content is at right now.

5. The Organizational Domain

The organizational domain is the one that seems to get everyone pumped up. It’s all about how information on the web gets classified and organized. Using current search tools, it can be very difficult to find many of the less common things you might look for on the web. The signal to noise ratio is very low, and finding specific details can require a great deal of effort.

That said, almost every recent discussion on the limitations of current web search technology also ends up talking about the “Semantic Web” and how it will help straighten this situation out. It has been touted as the ‘next evolution’ of the web.

The thought of a fully tagged web is a compelling one. Easily find just what you need. Make comparisons without visiting dozens of sites. Have the ability to create mash-ups out of virtually anything. It seems like the answer to all of the issues we struggle with today. Unfortunately, I think there are some very bright people that are simply glossing over the practical aspects of moving this academic concept into the real world. Implementing the promise of the semantic web will require reaching a global consensus on how things should be tagged, and then having everyone do that tagging themselves when they create new content. I see little chance of either of those things happening in a meaningful way.

People are generally lazy when it comes to things like tagging, and doing it correctly takes both time and effort. Most people will do the minimum they need to do here (which may end up being nothing). Without a thorough job of tagging done everywhere on the web, people looking for information will still need to use traditional search methodologies to find things. If they simply count on every site having precise and complete tags, they'll risk missing out on lots of valuable content.
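To make this concrete, here is a minimal sketch of the problem. The documents, tags, and queries are invented for illustration – the point is simply that a search relying on author-supplied tags misses untagged content that plain keyword matching still finds:

```python
# Two documents describing the same kind of item: one author tagged
# their content, the other (like most people) never bothered.
documents = [
    {"text": "Victorian oak writing desk for sale",
     "tags": {"category": "furniture"}},
    {"text": "Antique oak desk, excellent condition",
     "tags": {}},  # the author did the minimum: nothing
]

def tag_search(docs, **query):
    """Semantic-style search: trust only the structured tags."""
    return [d for d in docs
            if all(d["tags"].get(k) == v for k, v in query.items())]

def keyword_search(docs, term):
    """Traditional search: match against the text itself."""
    return [d for d in docs if term in d["text"].lower()]

print(len(tag_search(documents, category="furniture")))  # 1 - misses the untagged desk
print(len(keyword_search(documents, "desk")))            # 2 - finds both
```

Scale that second document up to most of the web, and tag-only discovery quietly drops a huge amount of perfectly relevant content.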

I also have no doubt that 'semantic spam' will emerge – content tagged to deliberately distort search results, or to get included in results by taking liberties with the intent of the query. Over-tagging content to raise its visibility has always been a problem in the professional content space, and I see it being even more problematic if implemented across the entire web.

On the commercial side, I don't anticipate a rapid adoption of the 'Semantic Web' either. If I were a retail business, I would be reluctant to disclose detailed information about my inventory levels or prices, especially if my competitors could look at it just as easily as my potential customers. And that type of disclosure might not capture the key aspects of my value to the market. I may do specialized in-home installations, or provide unique types of training or consulting, or have pre-configured bundles of goods that better serve my target markets. It's possible that none of that could be expressed in a meaningful way using a fixed schema. And I wouldn't want to find myself dismissed out of hand for not falling into the top three "best" stores based solely on a single unbundled price. If I thought there was a potentially meaningful downside, I'd simply avoid it.

Building the foundation beneath the Semantic Web is also a huge undertaking. Having worked on multiple industry standards bodies, I believe that reaching agreement on the broad set of schema needed to make the semantic web really work could prove elusive. Various parties in the marketplace will be advantaged or disadvantaged based on what's in or out of a particular schema (a taxonomy + enumerations). In defining a schema, you essentially define the question you want people to ask to discover you. Everyone will want the schema to ask the question they know they will have the best answer for. As a practical matter, a schema that everyone in a particular discipline can agree on will either be way too complex or way too simple to be a meaningful tool for discoverability.

I also think that there is an ontological problem with the Semantic Web. Classification depends on definition, and how a definition is applied is often a matter of perspective. What makes someone a ‘discount supplier’ or a ‘full service dealer’? What makes an item ‘rare’? This conceptual definition of a space is called an ontology. It’s not about taxonomies or enumerations, but meanings.

Reaching agreements on the meaning of specific terms – even within a single language – can be a challenge. Having to deal with multiple languages and cultural references complicates it even more. Most legal contracts devote pages to defining a few significant terms, and base those definitions on the precedents established by courts in related litigations. Think of all of the unique terms (enumerations) that will need to exist within all of the different taxonomies that will end up being created for this effort. They will all need to be exactly defined. Even if it ends up being possible to get everyone to agree on the various taxonomies and their enumerations, I don’t believe it is practical to achieve concordance on rationalized ontologies.

I'm sure there are many people with a differing view on this, but as far as the evolution of the organizational domain goes, I would not look to the Semantic Web for a solution. I believe it will come down to how the search engine space evolves. I think we will continue to see a refinement in the way the main search engines index the web and integrate more social cues into defining result relevance. I also see the emergence of a 'long tail' in search – smaller, more vertically focused search tools that address specific market segments in a very deep way. In general, I expect all search engines will focus more on delivering 'goal driven responses' that return a range of potentially useful content related to predefined common activities. There is a lot that can still be achieved using this approach.

So where does this leave us?…

While it may be beneficial from a marketing or fund raising perspective to hang a web version number onto a particular type of technology or service, it really distorts what is happening in this space. I see many people developing for the web that are consumed by an almost sophomoric enthusiasm to rush ahead to the next sexy thing. Unfortunately, unless we find ways to solve some of the tough foundational challenges that exist right now in the various domains I outlined, the web will never reach its real potential. By flipping through “web versions”, we’re only creating the illusion of crossing major milestones. All these issues will still need to be addressed at some point.

And they won't be any easier to solve by simply jumping to "Web 4.0"…

Barbarians At The Gate(Keepers)…


It’s said the Internet can route around any network problems it encounters…

While that claim may be debated on its technical merits at a network infrastructure level, the internet has proven very adept at routing around the bottlenecks and inefficiencies we find in the networks in our society. Information of all kinds finds its way around traditional gatekeepers and directly to the hands of individuals. In this new model, everyone can be both a consumer and a publisher. The traditional barriers – and the costs associated with them – are being swept away.

And its disruptive effects continue to transform our society…

Centralized aggregation – a middleman collecting, repackaging, and redistributing content – has become marginalized by increasingly more sophisticated search technologies that allow it to happen in a personalized way right at the edge of the network. Bits can move around the globe at the speed of light, regardless of what those bits represent. Any limitations their physical counterparts may have are absent here, and the powers that control distribution and access to them hold little sway on the web. As more things become digital, this fundamental characteristic of the internet continues to grow in significance.

And things are starting to change…

Radiohead, one of the hottest rock bands in the world today, has decided to release their latest album directly themselves – without using any record label or even a reseller like iTunes. And if that wasn't enough, they also decided to let people download the album – "In Rainbows" – for whatever price they want to pay. They let you enter the price you are willing to pay when you check out.

So are they crazy?…

In a word, no. The reason something like this can work is that top end recording acts today get only a 30% cut of record sales from the labels. They actually make their real money by touring – and that's a part of the music business that is doing quite well. For a band like Radiohead, this is actually a brilliant move. They will probably end up making more money from record sales (since they keep it all), and will end up broadening their fan base by making their music so accessible. And a broader fan base will help power ticket sales at concerts – the real place where the money is anyway.
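A rough back-of-the-envelope comparison shows why the math can work. The 30% label share comes from the discussion above; the album price, sales volume, and pay-what-you-want average are purely hypothetical:

```python
# Illustrative per-album revenue: label deal vs. direct sale.
# Only the 30% artist share comes from the text above; the album
# price, average price paid, and sales volume are invented numbers.

def label_revenue(album_price, artist_share, copies_sold):
    # Under a label deal, the band sees only its contractual share.
    return album_price * artist_share * copies_sold

def direct_revenue(average_price_paid, copies_sold):
    # Selling directly, the band keeps everything buyers choose to pay.
    return average_price_paid * copies_sold

copies = 1_000_000
via_label = label_revenue(album_price=10.00, artist_share=0.30,
                          copies_sold=copies)
direct = direct_revenue(average_price_paid=4.00, copies_sold=copies)
print(via_label, direct)  # 3000000.0 4000000.0
```

Even if fans pay, on average, well under half the old retail price, the direct take can still beat the label cut – before counting the extra concert tickets a bigger fan base sells.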

[Photo: Radiohead in concert, by basietrane]

This will clearly upset the balance of power in an already shaky recording industry struggling to regain relevance in a post-internet, post-napster world. The record industry depends on having these popular acts both as revenue producers for their bottom line today, and for the residual value they can bring through catalogue sales in the future. The economic viability of the industry will be challenged if more popular bands follow Radiohead’s lead and go independent.

In addition, Radiohead's decision to let people set their own price could end up establishing a precedent in the market that even 'white knight' outsiders like Apple's iTunes could find difficult to compete with. After all, if some of the top bands in the world were to sell directly and let people pay whatever they wanted (or even a significantly lower price than the market overall), why would people be willing to pay $.99 for single tracks of less popular bands?

The impact of this isn’t limited to the recording industry. Publishers, cable operators, movie studios, and information services are all in the same position. Any business that operates as a distributor, aggregator, or gatekeeper should be worried. They are quickly becoming a commodity, and need to find new ways to add value. A tipping point is coming, and there’ll be no turning back.

In fact, it may already be here…

Imagine The Possibilities…


I was sent a link to this video and wanted to share it…

While conducting an unrelated experiment, American inventor John Kanzius discovered that salt water, when excited by radio waves, becomes combustible. The chemistry of the experiment apparently results in a release of hydrogen from the salt water, which then burns:

The next step to understanding the commercial value of this discovery is to verify that the reaction releases significantly more energy than is required to drive it. If it does, then the potential this discovery offers us is incredible.
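That verification boils down to a simple ratio: energy out over energy in must exceed one, or the process consumes more than it yields. The numbers below are placeholders for illustration, not measured values from the experiment:

```python
# The basic energy-balance test for any candidate fuel source:
# a ratio above 1.0 means net energy gain; below 1.0 means the
# radio transmitter consumes more than the burning hydrogen returns.
# Both figures here are hypothetical placeholders.

def energy_return_ratio(energy_out_joules, energy_in_joules):
    return energy_out_joules / energy_in_joules

ratio = energy_return_ratio(energy_out_joules=800.0,
                            energy_in_joules=1000.0)
print(ratio > 1.0)  # False - at these numbers it's a net energy loss
```

Until measurements show that ratio comfortably above one, a striking demonstration isn't yet a fuel source.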

A virtually unlimited, inexpensive, environmentally safe energy source…

Think of how that could transform the world both economically and politically. Think of the human potential that could be realized globally as new opportunities manifest.

Wow…

I realize that what we see in the video is just one tentative step in this direction, and may not pan out as being a viable fuel source at all. It's just liberating to think of how different our society – on a global level – will look when an energy source like this finally does arrive.

Let’s hope this moves us closer…

The Changing Media Landscape…


Sun CEO Jonathan Schwartz has an interesting video on his blog of an interview he did earlier this summer with Pat Mitchell. It explores how traditional media companies need to come to grips with the reality of new media, the challenge of monetization, and dealing with piracy:

I found it to be a refreshingly honest discussion by someone who seems to understand the reality of what's happening on the ground when it comes to media. During an exchange near the end of the video, Jonathan Schwartz actually tells the general counsel of Viacom that he is 'deluding himself' for thinking that pulling their content from YouTube was a positive thing for the company.

That type of frankness isn’t something you often hear from any CEO…

There is a rather long intro at the beginning of the video that you can just skip over, but I think you will find the actual interview worthwhile.

FCC: A Conflict Of Interest…


With the sunsetting of analog television, the FCC intends to auction off the spectrum that it once occupied. All of the usual players are lining up to participate in the auction, but there are a couple of new faces in line that have the power to shake things up a bit.

Google and Frontline Wireless

While most everyone has heard of Google, Frontline is probably not that well known. Here is how they describe themselves:

Frontline Wireless envisions a 4G wireless broadband network that will make advanced Internet services as ubiquitous as the air we breathe. By leveraging efficiencies of shared spectrum and network infrastructure, Frontline will empower first responders with state-of-the-art technology and liberate consumers from the “walled gardens” of the incumbent wireless providers.

Frontline Wireless was founded by a collection of telco industry heavyweights, including former FCC chairman Reed Hundt, and backed by serious money folks like Kleiner Perkins’ John Doerr and Jim Barksdale of Barksdale Management Corporation (and former CEO of Netscape). These are folks who understand this space cold and should be taken very seriously.

Though there are some differences, both Google and Frontline are looking for the FCC to establish principles of openness as a part of this auction by requiring bidders to allow any device to connect to the network, any applications to live on the network, any party to access bandwidth on a non-discriminatory basis, and any internet provider to connect to the network.

They want the principles of Network Neutrality applied to wireless communications and services…

fcc-kjm.jpg

While the FCC has paid lip service to requiring this type of openness, chairman Kevin Martin hasn’t shown any willingness to lock it into the auction terms in any way that couldn’t be easily sidestepped by the big telcos. While he’s agreed to add requirements for open devices and open applications, these are meaningless gestures without requiring non-discriminatory access to bandwidth and a choice of ISPs and other service providers. As it stands right now, the wireless broadband landscape won’t look much different once this auction is over than it does today. It will still be the same players offering the same walled-garden approach to providing data, content, and services. Everyone knows that lock-ins are all about who controls the flow of bits – and nothing in the terms of this auction will change the status quo.

The consumer will still be captive to essentially government anointed monopolies…

I would be hard pressed to find anyone who considers the limited choices, termination fees, and service restrictions imposed by the existing wireless carriers to be serving the public interest. But somewhere along the line, that seems to have stopped mattering. The FCC’s priority in this auction is more about maximizing the revenue that can be raised than anything dealing with the public good. They have become conflicted by Congress’ desire to realize a revenue windfall.

Hopefully it’s not too late…

I believe the FCC should be focused on creating the same conditions in the wireless space that have turned the wired internet into the backbone of our global economy. While there are clearly differences between the two, there is a lot of creativity in the marketplace, and some very interesting thinking around both policy and technology initiatives that could help maximize the bandwidth available in the wireless spectrum. This is innovation that will not just serve the common weal, but also enrich the coffers of government by fostering the growth of a new commercial wireless ecosystem. The one real downside to this approach is that it doesn’t offer the big near-term revenue win that our short-sighted political establishment is looking for.

Our transition from analog to digital television transmissions is offering us a unique opportunity to transform the wireless landscape. It would be a shame for us to sacrifice the significant good we could realize here at the altar of near-term revenue – money that will likely be squandered before it is even all collected.

As an increasingly mobile digital society, we can’t afford to let that happen…

The Mainstream: Consumed By The Long Tail…

Share

It’s hard to count how many times, in casual conversations with people my age, quotes pop up from television shows or songs that came out when we were growing up.

There was a focus to the popular media at that time that let it imbue the culture of the day. Humming a few bars from a television theme song could say more about a situation you were in than words ever could. Media of all kinds seemed to offer a common vocabulary we could draw on to be broadly understood. It carried context because most people were familiar with it.

There just wasn’t a lot of choice back then…

When I grew up, television consisted of the three big networks, two local stations, and PBS (which no self-respecting kid would admit to watching). FM radio was gaining traction, but most music was still being listened to on just a couple of AM stations. There were a LOT fewer records being released each year, and they didn’t all sound alike – music seemed to resonate more. We got our news from a local newspaper (or the New York Times on Sundays) and from the evening news with Walter Cronkite.

There wasn’t much media of any type available then, and you could take most of it in without being overwhelmed.

But that has changed today…

Now we have more cable and satellite stations than most people even know they subscribe to. We have two satellite radio networks in addition to the AM and FM bands we had before. New records and singles are being released constantly – pumped out in the “genre of the moment” from people we’ve never heard of before and likely won’t hear much from again. And news bombards us 24×7 from just about everywhere we turn. There is so much flying at us that it all seems to blend together into a wall of noise.

And people are simply choosing to ignore most of it…

People are retreating into their own media ‘comfort zones’ to get away from it all. Devices like iPods and TiVos have become media firewalls that help folks keep the outside world at bay. Everyone can choose what they want to watch, read, or listen to – and when – independent of what’s being pushed or played in the “mainstream”. Everyone has become their own gatekeeper – and devices like these are the gates. This became possible because of the internet. It bypasses all of the traditional gatekeepers and has now developed into a comprehensive and ubiquitous media distribution channel.

And by crossing the Rubicon of on demand access, the marketplace has been fundamentally transformed. The ‘long tail’ has taken hold.

I strongly believe that the emergence of the ‘long tail’ in media consumption is happening at the expense of the mainstream model, not as an extension to it. What we see now with individualization is an evolving phenomenon. I believe that as it plays out, it will ultimately consume the mainstream as we know it today. People will pick whatever interests them – be it new or old – and consume it where, when, and how it is most convenient for them.

In the future ‘the mainstream’ will be born out of the long-tail, and look very different. Media will increasingly be created for a loyal, core audience – not ‘mass consumption’. Any broader recognition for it will be driven through interest across numerous emerging social networks (think more Pandora or Digg than MySpace) and content will more democratically receive the scope of visibility and level of attention it deserves. The sharp divide that exists today between niche and mainstream media will blur, becoming a continuum of interest and reach. Content will seek its own level.

And truly global mainstream ‘successes’ will become rare…

These changes will reshape the media landscape, and present incredible opportunities for new players and business models to emerge. They will also challenge us at a social level, further shrinking our common cultural footprint while at the same time removing the borders that typically divided us.

I want to explore these issues more in future posts…

Telecom "Gets It" – They Just Don't Want It…

Share

There was an interesting interview in the Toronto Star with RIM’s co-CEO Jim Balsillie. The article focuses heavily on how RIM views Apple’s entry into the high end ‘smart-phone’ market, and looks at how it could impact the mobile industry.

While the article is interesting at a general level, there was an observation made in it by Mr. Balsillie that is really worth paying attention to. When looking at how Apple is taking control over the entire user experience surrounding the iPhone, he cautions:

“It’s a dangerous strategy… It’s a tremendous amount of control. And the more control of the platform that goes out of the carrier, the more they shift into a commodity pipe.”

Clearly, this is something Balsillie views as a bad thing. And what he is actually lamenting is something I’ve discussed in the past:

Network Neutrality…

The fact is, carriers are simply commodity providers. They depend on monopoly control of spectrum and physical towers to build out their own “walled gardens” of content, services, and devices. The argument being made here in the wireless space is the same argument being made in the cable space. Or more generally by the telecom industry as a whole.

The telecom industry clearly gets it – without some form of monopoly control, they are just a commodity. They leverage that control to limit consumer choice, reduce competition, and bundle services. They live in fear of losing that monopoly advantage. That’s why you see them fight any metro Wi-Fi initiatives that come up across the country – they would no longer be the gatekeeper and toll collector if those succeeded. It’s why WiMAX is taking forever to arrive.

Maintain The Monopoly!…

The only reason AT&T gave in to Apple is that they are desperate. They know their network needs serious upgrading and their customer support is terrible. The iPhone gives them a selling point in spite of that. It buys them time to become competitive, and attracts new customers to fund improvements. And even in as bad a shape as they are, they still managed to get some things they wanted in the deal as well – unbundled SMS, no IM, no VoIP applications, and two-year commitments or extensions.

Left unchecked, everyone connecting to your home or office would take Jim Balsillie’s observation to heart and do everything they could to ‘lock you in’ so they could be more than “just a pipe”.

If you aren’t sure that network neutrality is really an issue, keep an eye on the wireless carriers. They see where the introduction of the iPhone could lead, and are scared to their core of the possibility. Network neutrality is the last thing any of them want.

And they are gearing up for a war to stop it…

Biometric "Cookies"…

Share

It’s a fact of life today – you are being tracked…

And not just online.

The government has permanent cameras up in most major cities, and at major transportation, entertainment and sporting venues around the country. There are cameras all over most office buildings and stores. There are smart cameras that take your picture at toll booths and major intersections. For the most part, your public life is well documented in stills and videos. These technologies have become a major tool of security and law enforcement.

But the uses for it don’t stop there…

Video/image analysis technology has progressed to the point where it is starting to become interesting to retailers as well. And this interest isn’t for deterring shoplifting or petty theft.

Retailers want to use it to track and target consumers…

This space is called biometrics, and there is a lot happening here that should be making folks think.

eyebox2.jpg

For example, a company in Canada named Xuuk, Inc. has released a product called eyebox2 that is designed specifically to track customer eyeballs. Basically, it can detect if someone is looking in its direction from up to 10 meters away. When integrated into in-store displays, it allows vendors to see how many people look at specific ads or merchandise, how far away they are when they first see them, and how long they maintain eye contact with them.

This is the physical equivalent of counting click-throughs…
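To make the click-through analogy concrete, here is a minimal sketch of how gaze detections could be rolled up into display-level metrics. The event structure and the numbers are entirely invented for illustration – eyebox2’s actual output format isn’t public in this kind of detail:

```python
from dataclasses import dataclass, field

@dataclass
class GazeEvent:
    """One detected look at a display: distance (meters) and dwell time (seconds)."""
    distance_m: float
    dwell_s: float

@dataclass
class DisplayStats:
    """Aggregate 'physical click-through' metrics for one in-store display."""
    events: list = field(default_factory=list)

    def record(self, event: GazeEvent) -> None:
        self.events.append(event)

    def impressions(self) -> int:
        # How many people looked at the display at all.
        return len(self.events)

    def avg_first_look_distance(self) -> float:
        # How far away people were, on average, when they first looked.
        return sum(e.distance_m for e in self.events) / len(self.events)

    def avg_dwell(self) -> float:
        # How long, on average, people maintained eye contact.
        return sum(e.dwell_s for e in self.events) / len(self.events)

# A few hypothetical gaze events for one display:
display = DisplayStats()
for dist, dwell in [(8.5, 1.2), (3.0, 4.5), (6.1, 0.8)]:
    display.record(GazeEvent(dist, dwell))

print(display.impressions())  # 3
print(display.avg_dwell())    # average eye-contact time in seconds
```

The point is just that a stream of anonymous “someone looked here” events is enough to produce the same kind of funnel metrics advertisers already get online.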

Another company in the biometric space – Neven Vision – was purchased by Google less than a year ago. Neven Vision specializes in facial recognition – the ability to recognize a specific individual based on an image of their face. These types of systems can work well with simple photographs, as well as dynamic video streams. Google acquired them to explore identifying specific individuals in pictures on their photo sharing service, Picasa, so people could more easily search photo libraries for people they know.

This is still a work in progress for Google…

With the facial recognition technologies available today, people no longer need to be looking directly at a camera to make a good match. Systems today use multiple images from various angles to produce what is effectively a 3-D map of a face. They also supplement that approach with an analysis of specific skin details like scars, moles, or other permanent cosmetic attributes. When these methods are combined, even people walking in crowds can be identified at a reasonably high level of accuracy.
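The “combining” step can be as simple as weighting the two match scores against a threshold. This sketch is purely illustrative – the weights, threshold, and scores below are made-up numbers, and real systems learn these values rather than hard-coding them:

```python
def fused_match_score(geometry_score: float, skin_score: float,
                      w_geometry: float = 0.7, w_skin: float = 0.3) -> float:
    """Combine a 3-D face-geometry match score and a skin-detail match score
    (each in [0, 1]) into a single confidence value.

    The weights are hypothetical; a deployed system would tune or learn them.
    """
    return w_geometry * geometry_score + w_skin * skin_score

# A strong geometry match plus a moderate skin-detail match:
score = fused_match_score(0.92, 0.65)
accepted = score >= 0.8  # 0.8 is a hypothetical acceptance threshold
print(score, accepted)
```

Either signal alone might fall below the threshold, but together they can push an identification over it – which is exactly why fusing modalities makes crowd-level recognition feasible.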

It isn’t beyond the realm of possibility that a company like Google that is so focused on the advertising space would see this type of analysis as a natural extension of their business. If they were to combine it with eye tracking technology like Xuuk provides, they could build a profile of what stores specific individuals visited, what items and ads they looked at, where they ate, etc. They would also be able to provide a bridge between online and offline tracking.
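A toy sketch of what such a cross-location profile might look like – the face IDs, locations, and items here are all invented for illustration:

```python
from collections import defaultdict

# profiles: recognized face ID -> list of (location, item_looked_at) sightings.
profiles = defaultdict(list)

def log_sighting(face_id: str, location: str, item: str) -> None:
    """Record that a recognized face looked at an item at some location."""
    profiles[face_id].append((location, item))

# Hypothetical sightings of the same recognized person across a shopping mall:
log_sighting("face-1138", "electronics-store", "tv-ad")
log_sighting("face-1138", "food-court", "pizza-sign")
log_sighting("face-1138", "electronics-store", "phone-display")

# The 'biometric cookie': everywhere this person was seen, and what they looked at.
print(profiles["face-1138"])
```

Each new camera feed just appends more sightings to the same key – no login, no opt-in, no cookie to clear.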

Each person would effectively become their own walking ‘biometric cookie’ – one that, unlike its browser counterpart, can’t be deleted. They could be identified and tracked everywhere there was a camera.

And nowadays, that pretty much means everywhere…

This all reminds me of a scene from the movie ‘Minority Report’ where the main character John Anderton (Tom Cruise) walks into a Gap store and is immediately recognized through biometrics (in this case, a retinal scan):

While we aren’t there yet, we are probably much closer to having this capability than most people realize – at least at a basic level. And I feel that even the possibility of being personally tracked through biometrics everywhere we go should be a concern to us as a society – especially when you consider how far our current government is willing to snoop into what were traditionally private areas of our lives, and how aggressively corporations try to target us with advertising.

If privacy isn’t dead already, this would certainly kill it…