Though we still have a ways to go with development, here is a quick overview of the first product we will be launching – a turnkey solution that integrates Skype into professional video production workflows:
The journey has just begun…
In any ecosystem, there is a natural resistance to change. From simple familiarity to structural interdependencies, many elements converge to support the existence of the status quo. That doesn’t mean that the status quo is ideal or even good – simply that changing out of that state is more expensive in some aggregate way than remaining there.
This ‘cost of change’ creates a form of static friction in the system, allowing it to continue in its current state even when pressure for change exists. But as at the fault lines that define the boundaries between our planet’s continental plates, this pressure will continue to build until it reaches a level that can no longer be resisted. When that point arrives, the built-up pressure gets released in a single, significant shift (an ‘earthquake’ event) that ushers in change – producing a period of instability as the system searches for a new equilibrium. This is an unavoidable process in any dynamic, living environment – be it economies, technologies, political systems, or even cultures.
And for those who have to go through these shifts, it can be very scary and painful.
I’ve been thinking about all of this in the context of two key forces that exist in most ecosystems – regulation and innovation. Regulation is typically put in place to ‘raise the cost’ of the system moving in certain directions. Innovation, by contrast, attempts to ‘lower the cost’ of the system moving in certain directions. While both try to influence what the system should look like in a future state, they are fundamentally different in nature and intent.
Regulation can take two forms – proscriptions and mandates. Regulatory proscriptions artificially raise the cost of certain actions through the implementation of penalties. For example, fines can be imposed, business licenses revoked, taxes levied, or even people locked up if they try to do certain things that are no longer ‘allowed’. Regulatory mandates can impose similar penalties if certain specific actions are NOT taken, forcing activity to take place that would otherwise not happen on its own. Sometimes, regulatory mandates will eschew penalties to take the form of incentives – rewarding certain actions by artificially lowering the cost of the system moving in a particular direction.
The whole premise for creating regulations is a belief that we can understand enough about a desired future state to formulate an optimal plan to get there. Because of this, the broader or more far-reaching a regulation is, the harder it will be to get right. The more complex an ecosystem is, the harder it will be to change or influence predictably.
What makes innovation different from regulation is that the innovator, unlike the regulator, doesn’t get to impose their view of the ‘best future’ path on the system they operate in. Instead, they need to offer something new and compelling enough that it can lure people away from the status quo. They constantly need to compete with other visions of the future – other potential options that are being offered. This forces them to continuously adapt and improve what they offer, or to drop out of the ‘selection process’.
In an innovation driven system, bad ideas don’t last long. The costs and benefits of every path tend to be exposed early, and choices made at one point can constantly be reassessed against new alternatives and adjusted. Nothing is ‘locked in’, lending an efficiency to the process that biases it toward positive outcomes.
Regulation, on the other hand, doesn’t provide anywhere near the same clarity. Since regulations aren’t about choices, their benefits can only be measured against the hypothetical end state they were implemented to avoid (‘Millions would have lost their jobs if we hadn’t done XYZ…’). There isn’t an actual alternative path their effectiveness can be measured against, allowing regulators to justify them with theoretical extrapolations of past conditions – extrapolations that normally assume nothing else in the system would have changed to provide a different, better outcome. The only time a regulation typically gets challenged is when the damage it does – the only real measurable cost of a regulation – clearly outweighs the perceived benefits of keeping it in place. This biases regulation toward negative outcomes.
My point here isn’t that all regulation is bad and all innovation is good – just that it is easier to identify and correct bad innovations than it is to identify and correct bad regulations. Innovations need to prove they are better before being adopted, and are constantly challenged by new ideas moving forward. Once passed, regulations are not normally challenged, and usually need to reach a point where they are demonstrably bad before being thrown out.
This means that regulations tend to create a status quo that is highly resistant to change, making the inevitable dislocations extremely painful when they finally arrive. Innovations tend to reduce the strength of the status quo, allowing change to happen on a more continual, ‘just in time’ basis.
Both have their place, but I strongly believe we need to be looking to innovation as the defining component of our path to a better future.
I was thinking about how software based systems tend to develop over their lifetime, and have come to the sad realization that most developers and systems managers are the digital equivalent of pack-rats.
While many are wizards at adding and extending the features and capabilities of the systems they work with (and in some pretty amazing ways), they can be almost dysfunctional when it comes to getting rid of code and infrastructure that has outlived its original purpose. Some of the best developed systems around seem to just collect screens, functionality, subsystems, API calls, database tables, and so on that – while possibly important at one time – add almost no value to the end user today. The fact that the most significant feature of Apple’s newly released “Snow Leopard” version of their operating system is its cleaned-up, slimmed-down code base speaks volumes about the state of complex code packages these days.
There are lots of reasons systems get fat. Some of it comes from engineers simply over-engineering – making things more complicated than they really need to be, usually by choosing purity over practicality. A LOT more of it comes from the “need” for companies to continue adding new features to their platforms – no matter how marginal – to generate upgrade revenue and justify support contract costs.
Some of it also comes from designers that like to keep the product fresh, programmers that want to add ‘cool new things’ they are interested in, and sales folks that push for one-off additions to try and win new sales.
When it comes to bloat, there’s plenty of blame to go around.
But wherever it comes from, all of this extra code (and the infrastructure that goes into supporting it) typically ends up surviving release after release. And while there may be someone out there that is actually still using it, support for marginally used functionality comes at a steep price – one that touches nearly every area of a system, from code complexity and testing effort to the infrastructure needed to keep it all running.
This boils down to one simple thing: the need for a more disciplined approach to designing systems. Designers need to place the same value on pruning marginal features from a release as they do on adding new ones to it. They need to know their clients, know their markets, and have the guts to make the near term tough calls that will result in a better product for everyone over time.
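One practical way to back up those tough calls is to actually measure how features get used. Here is a minimal sketch of the idea – the feature names and the 1% threshold are invented for illustration:

```python
from collections import Counter

class FeatureUsageTracker:
    """Counts feature invocations to surface pruning candidates."""

    def __init__(self):
        self.counts = Counter()

    def record(self, feature_name):
        """Call this from each feature's entry point."""
        self.counts[feature_name] += 1

    def pruning_candidates(self, threshold=0.01):
        """Return features whose share of total usage falls below threshold."""
        total = sum(self.counts.values())
        return [f for f, n in self.counts.items() if n / total < threshold]

# Hypothetical usage: one heavily used feature, one barely touched one.
tracker = FeatureUsageTracker()
for _ in range(991):
    tracker.record("export_pdf")
for _ in range(9):
    tracker.record("legacy_fax_gateway")
print(tracker.pruning_candidates())  # ['legacy_fax_gateway']
```

Numbers like these don’t make the decision for you, but they turn ‘I think nobody uses this’ into something designers can actually act on.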
I love home automation systems. Receiving status and having control of my entire house – all through a single interface – is something I’ve been dreaming of for a long time. While I have made some progress in this area, I’m still operating the technology in my house with several disconnected control interfaces (lighting, HVAC, and AV systems) – and I do still need to make the occasional visit to the drawer full of remotes to get certain things to work.
There are some control systems available in the marketplace that attempt to provide a ‘whole house’ control experience. However, beyond being expensive to customize and install, they tend to have pretty primitive control interfaces (mostly virtual menus and buttons on a touch screen) and are not very intuitive to operate – even for technically inclined people.
Well the folks from the Media Interaction Lab at the Upper Austria University of Applied Sciences have come up with a new control interface design called CRISTAL – Control of Remotely Interfaced Systems using Touch-based Actions in Living spaces. It is built around a multitouch table (reminiscent of Microsoft’s Surface computing device) that presents a live image of the room, controllable via multi-touch gestures. This video provides a good overview of how the system operates:
This looks like the type of system I would love to have in my house. Touch based, intuitive, and with one panel controlling everything. Even though this is just a lab based demonstration, all of the technologies it is built on already exist in the consumer marketplace today. I could see something like this going commercial in the next few years if it gets a big enough backer to launch it with the scale it needs to be successful. Both Apple and Microsoft come to mind as potential providers of this kind of home experience. In fact, if Apple does release a tablet at some point in the near future, I believe providing this kind of control surface will be one of the motivating factors.
This is definitely the kind of experience people are looking for. It’s not about navigating menus and touching buttons. It’s about interacting with intuitive proxies for the environment around them. Without a doubt, the army of traditional remotes most people struggle with today has overstayed its welcome.
And no one I know would be sorry to see them go.
The success of what has become known as the “Cash for Clunkers” program got me thinking about the power of incentives to affect behavior.
The program – officially called the Car Allowance Rebate System (CARS) – was passed by Congress and implemented by the Department of Transportation. A download from their website describes the program as follows:
The Car Allowance Rebate System is a new program from the government that will help you pay for a new, more fuel-efficient vehicle from a participating dealer when you trade in a less fuel efficient one.
While this program has had the benefit of stimulating new car sales, it really hasn’t done enough to achieve its key objective – improving fuel efficiency and reducing greenhouse gas emissions. New cars being purchased through this program only need to have a fuel economy rating of 22 miles per gallon – 3 MPG below the currently mandated 25 MPG average that an automaker’s entire fleet must achieve. This is simply not a very impressive goal for a program that taxpayers are being asked to underwrite – especially when it is being sold to the public as a green initiative and not an industry bailout.
The concept behind the program is a good one, and is worth pursuing. But the current program is very expensive, and won’t achieve the results an initiative of this size should. I would like to see a new version of this program set up to replace it that leverages both incentives and disincentives, is simpler to administer, and reduces the burden to taxpayers.
The new program would be based around how a new vehicle’s mileage compares to the currently mandated fleet average. Based on today’s average, 25 MPG would be the initial benchmark. Any car with better gas mileage would receive an incentive rebate of $1000 for each 10% of MPG improvement it has over that benchmark, with the total rebate capped at $5000. Helping to subsidize this rebate would be a tax on vehicles falling below the 25 MPG rating, taking a similar approach of adding $1000 to the cost of a vehicle for each 10% of MPG it falls below that average mark. It would be capped at $5000 or 10% of a vehicle’s total cost – whichever is less. The benchmark could potentially rise each year: if the actual fleet average MPG from the previous year ends up being higher than the 25 MPG benchmark, it would become the new benchmark for the next year. If it is lower, the current benchmark would remain unchanged.
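To make the math concrete, here is a minimal sketch of how the rebate and tax would be calculated. It assumes each full 10% increment counts (rather than prorating partial increments), which is my reading of the proposal above; the function name and sample prices are just for illustration:

```python
def rebate_or_tax(vehicle_mpg, vehicle_price, benchmark_mpg=25.0):
    """Return a positive rebate or a negative tax for a new vehicle.

    Each full 10% of MPG above the benchmark earns a $1000 rebate,
    capped at $5000. Each full 10% below it adds a $1000 tax, capped
    at $5000 or 10% of the vehicle's price, whichever is less.
    """
    pct_diff = (vehicle_mpg - benchmark_mpg) / benchmark_mpg
    steps = int(abs(pct_diff) * 10 + 1e-9)  # number of full 10% increments
    if pct_diff >= 0:
        return min(steps * 1000, 5000)  # incentive rebate
    return -min(steps * 1000, 5000, 0.10 * vehicle_price)  # disincentive tax

# A 35 MPG hybrid sits 40% above the benchmark -> $4000 rebate.
print(rebate_or_tax(35, 28000))   #  4000
# A 15 MPG truck sits 40% below -> $4000 tax (under its $4500 price cap).
print(rebate_or_tax(15, 45000))   # -4000
```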
A program like this – with escalating incentives and disincentives – has the potential to shift consumer interest from larger cars over to hybrids and other alternative fuel technology vehicles. It can shift the focus of the auto manufacturers as well. The market advantage for any company that ‘out-innovates’ its competitors in the fuel efficiency area could be significant, and the penalty for falling behind could be severe. This will force every auto company to invest more and prioritize innovations in technologies that will keep them competitive in this space. They simply can’t take on the risk of neglecting it.
As for getting rid of the clunkers that are out there on the road today, I would create a standing offer of $1500 for any insured car in working condition regardless of age or particulars – no new car purchase required. That could put cash into people’s pockets that they could spend anywhere, or save for a rainy day. Any business could be approved to accept these vehicles – not just car dealers. They could receive a $500 processing fee, and would need to follow the same terms around destroying the vehicles and recycling them for scrap.
By being marketplace driven, these approaches could end up improving fuel efficiency standards far more effectively than any government mandated standard could. Not only that, they could also help the US auto industry begin to produce vehicles that are globally competitive.
As a nation we need to become leaders in green technology. It is not just an imperative from an ecological perspective, but also from an economic one. Success in green tech will define the winners over the next 25 years in the same way that success in digital technology has defined the winners over the last 25.
It’s too important for us – and for the world – to just pay lip service to it.
There’s been a lot happening recently in the ebook marketplace…
In the update I wrote a couple of weeks ago on the Amazon Kindle, I commented on the price of readers:
…the recent Kindle price cut, bringing it down to $299, is another step in the right direction. Though I personally believe it will need to move below $100 to really start to gain mainstream traction, breaking below the $300 price crosses a psychological threshold that makes it easier to bring in that next level of interested buyer.
Well Sony just moved the bar a little closer to that tipping point price, breaking the $200 barrier today with the announcement of their new Reader Pocket Edition. It has a somewhat smaller 5″ e-ink screen and can hold about 350 books. Sony has also announced that they will be matching Amazon’s price of $9.99 for recent best sellers.
Though it doesn’t have any way to let you buy books wirelessly like the Kindle does, the Reader Pocket Edition does cost $100 less – probably a fair trade off for many people. By having an under-$200 reader, along with lower ebook prices, Sony should be able to open up the ebook market to a much wider audience. This is a significant move down the price curve, and will hopefully keep pressure on Amazon to continue moving their own prices lower.
Another bit of good news in the ebook world is that Barnes & Noble has decided to jump back into the business. For those that don’t remember, B&N was the supplier of digital books for the pioneering NuvoMedia Rocket eBook in the early 2000s. After acquiring Fictionwise earlier this year (relaunched as ereader.com), they are now starting to pull their digital book strategy together. Unlike Amazon and Sony, B&N decided not to launch their own dedicated ebook device to go along with their new digital store. Instead, they are starting out by releasing a free software reader that runs on both the iPhone and iPod Touch, with an eye toward partnering with multiple ebook device makers in the near future. This could be an excellent strategy for them. Given the lead Amazon and Sony have in this market, it makes sense for B&N to become the ‘open platform’ in the ebook world with the broadest choice of reading options available. Backing up this effort, they have launched with a significant number of ebooks already available for sale, as well as around 500k free public domain books available for download. Though not as splashy as Amazon’s launch of the Kindle 2, B&N has made a very credible entry into the ebook market here. And like Amazon, they have the buying power and focus necessary to evolve, become successful, and turn this into a viable component of their overall business.
With three major competitors in the ebook space (and many smaller ones as well), it’s clear that this market isn’t going to fade away this time like it has in the past. Between the introduction of new reading devices and the continually expanding catalog of books now available in digital format, the ebook business shows every sign of being a young, healthy consumer product segment.
But there could be one big shakeup coming in the near future.
While there isn’t a lot of clarity around their intentions, Apple is shaping up to become a possible “800lb Gorilla” in the ebook space. With mobile reader apps available from both Amazon and B&N, the iPhone/Touch already offers a great platform for digital book readers. Rumors are also making the rounds that Apple will be launching a color “tablet device” with a 10″ screen – a general portable media platform that could easily include ebooks in the mix. What lets Apple cast such a long shadow over this space is the power of their iTunes ecosystem. They have the store. They have the desktop footprint. They have the device footprint. The introduction of a larger form factor ‘tablet device’ could place them in the perfect position to subsume the ebook market within the already significant digital media market they dominate today.
While even the launch of a new device from Apple is only speculative at this point, their ability to dominate a market has a clear precedent in the way they have moved from music, to audio books, to podcasts, to television shows, and recently to movies. They started out small in each of these areas, but over time have managed to become the dominating force in all of them.
Whatever ends up happening with Apple, it’s great to see so much new activity going on in the ebook space. It seems to be moving into the mainstream on the consumer side even faster than I thought it would.
Hopefully publishers will take note and finally start to ramp up their digital efforts.
It feels like the tipping point is finally getting close.
Fred Wilson has a great post on his blog this morning about the semantic web (Making The Web Smarter). Beyond the mention of my company InfoNgen, it also provided an interesting perspective on how the web is evolving in practice. This is a subject I’m passionate about, so I couldn’t pass up the opportunity to throw in my two cents.
With InfoNgen, I spend a great deal of time thinking about potentially new and innovative ways to analyze and classify content – including a broad range of web based content. Without a doubt, the research going on around the semantic web is some of the most interesting in this field. While there has been some really exciting progress in applying this research to many constrained information domains, creating this self-describing, intelligent network of information on an “internet wide scale” is still an incredibly daunting task.
And as Fred points out, it isn’t one we are making a lot of progress in.
I am struck by the similarities between the efforts happening here and the work that took place from the ’70s to the early ’90s in the field of artificial intelligence. In computer circles, A.I. was the cutting edge discipline of its day. Until the arrival of the Internet, it was a magnet for creative engineers and scientific talent. People saw it as the next great revolution in technology. Encouraged by successes like chess playing computers that could beat grand masters and medical expert systems that demonstrated real value in clinical situations, expectations were high that we would soon see computers that would be able to interact with us conversationally – personal assistants that could carry out spoken directions and provide us with relevant advice and information. This video – done by Apple in 1987 – is a great example of what people were hoping computers would soon be able to do for them:
More than two decades later, we’re still a very long way off from the promise shown in this video.
Today’s efforts to create the foundation of the semantic web are in some ways like a reemergence of artificial intelligence – but now repackaged for a web centric world. Many of the concepts and technical disciplines that were sitting behind A.I. – Bayesian inference, natural language processing, weighted decision trees, classifiers, and knowledge bases just to name a few – are now in some form or fashion powering various commercial and open efforts to realize the semantic web. And while they do share a common set of technologies, that doesn’t mean they need to share a common fate.
But to be successful, things will need to start coming together in a different way.
This time around, these technologies will need to leverage the core social fabric inherent in the web architecture. Analysis needs to be pushed out to the edge and become an integral and interactive part of the content creation process. This would not only be able to suggest tags or other meta level markups, but also offer potential summaries for quick display, highlight ambiguous terms or content blocks for refinement, and suggest unique topical terms that could be included in the content to improve discoverability. The human generated editorial insights that exist in trusted content sets need to be leveraged to mine for relationships in other content sets that exist more broadly. (Fair use/copyright law will need to be updated and clarified to keep up with innovations in this area.) Most importantly, the creation of public databases, taxonomies, and ontologies needs to become a priority for open source efforts, potentially leveraging a DBpedia style model of publication and quality control. Freely available datasets will be the fuel that powers many of these efforts going forward. Overall, any successful approach here needs to blend the things people do well with technologies that can amplify and extend them, producing something neither could accomplish well on its own.
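As a toy illustration of what ‘analysis at the edge’ could look like, here is a minimal sketch of suggesting tags at authoring time from simple term statistics. A real system would lean on NLP and shared ontologies; the stopword list, length filter, and sample text here are all placeholders:

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "it",
             "that", "this", "for", "on", "with", "as", "are", "be",
             "will", "where", "before"}

def suggest_tags(text, max_tags=5):
    """Suggest candidate tags from the most frequent non-stopword terms."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 2)
    return [word for word, _ in counts.most_common(max_tags)]

draft = ("The semantic web will need analysis pushed to the edge, "
         "where authoring tools suggest tags and surface ambiguous terms "
         "before content is published to the web.")
print(suggest_tags(draft))  # e.g. ['web', 'semantic', 'need', ...]
```

Even something this crude could run inside an authoring tool and let the writer accept, reject, or refine the suggestions – keeping a human in the loop exactly where judgment matters.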
With all of that said, I’m not naive. I don’t believe we will ever have a truly global, harmoniously classified semantic web. There are simply too many perspectives to rationalize in a way everyone can agree on, and too many people looking to game the process for their own gain. The Utopian model discussed academically is really an idealized goal that isn’t achievable on a practical level. But I strongly believe that it will be possible to offer to the broad web community the same improved web experience currently provided by vertically focused solution providers like InfoNgen. Meaningful progress at this level will require more than the isolated technological breakthroughs of any single company or organization. Though it can be anchored around the same core semantic concepts, getting the scale and scope needed to succeed here will require some kind of cooperative framework to share and enhance the currently disconnected efforts and innovations that are taking place today. Without having some mutually beneficial relationship exist between the various commercial and open sourced initiatives, it is likely that the global semantic web will end up hitting the same kind of wall that the original efforts in A.I. did.
While a technical discussion of the various solutions in this space may be interesting, the end goal of the semantic web is to make it easier for individuals and organizations to discover and apply information that is relevant to them. This means that access to content needs to become more flexible, and conform to the variety of ways people may think about it and want to consume it. This is in sharp contrast to the traditionally rigid way publishers have wanted to package and present it in the past.
None of this will be easy, but getting publishers to embrace this kind of change may be the biggest challenge of all.
It will be great to finally see a truly web based operating system released…
Though there is still a great deal unknown about Google’s Chrome OS, it will likely be the next logical step in operating system development: a rich edge-based footprint for web centric computing. If combined with their recently unveiled unified messaging environment Google Wave, Chrome OS will offer a fairly unique and attractive user experience. By providing a slimmed down set of local services to cleanly extend open web standard support – without the need for any legacy support – Chrome OS should be able to offer some significant performance benefits vs. Windows. Here’s what Google said about it in their own recent announcement:
Speed, simplicity and security are the key aspects of Google Chrome OS. We’re designing the OS to be fast and lightweight, to start up and get you onto the web in a few seconds. The user interface is minimal to stay out of your way, and most of the user experience takes place on the web. And as we did for the Google Chrome browser, we are going back to the basics and completely redesigning the underlying security architecture of the OS so that users don’t have to deal with viruses, malware and security updates. It should just work.
I have no doubt that Google will try to make Chrome OS a fairly complete solution out of the box. They can certainly roll together all of their own web applications with popular 3rd party web apps to cover most of the key functionality people would look to have when they power a system on. I also expect that Google will extend their Android “App Store” and fold it into this new OS. This would let new applications download and install just like browser plug-ins instead of like traditional Windows applications. If Google can combine that simplicity with ‘instant on’ functionality, Chrome OS will offer a clearly differentiated computing model from any of the “old-school” operating systems.
This is an exciting and important move by Google. Microsoft’s “Windows” is the crown jewel of tech industry franchises. Even for a company the size of Google, grabbing just a small piece of Windows’ total market share – even an overlapping piece – would be significant. Chrome OS has a lot of potential here.
While the move to a web centric operating system may appear conceptually correct and even inevitable, Google will still need to overcome a lot of challenges to make Chrome OS a success.
At this point, Google’s Chrome OS is just an idea with potential. Its success will depend on focus, attention to detail and flawless execution. They will need to articulate clearly how this fits in with their seemingly competitive investment in Android, and actively work with partners in the marketplace to make sure support is there for it on launch day. Even though Chrome OS will be open sourced upon release, Google needs to take ownership of getting penetration in the market. This is different from any other product they have launched. Google will be asking people to depend on Chrome OS for everything they want to do, and will even need to convince new system buyers to bet their entire purchase on it. It needs to be a complete, fully functional, well supported offering.
I’m excited to see how well Google rises to the challenge…
Regardless of how you may feel about the pervasive coverage of the passing of Michael Jackson, this event has served as yet another reminder of just how dominant a role the internet now plays in the distribution of news and media. It was the first channel many people turned to as that story rapidly developed.
Ironically, these types of events also remind us of the many limitations that still remain around web content delivery, and the broad challenges the web will face in supporting the kinds of things we are expecting it to support one day.
So how did the web hold up?…
As events unfolded, the LA Times – which broke the Jackson story – had its website crash after several million visitors hit it in less than an hour. Many other major news sites slowed down significantly due to high volume. Even Google, a company used to dealing with access on a massive scale, had problems handling the load of people searching on its Google News section for information about Jackson’s death. And Twitter, a site that has effectively become the web’s real-time “newswire”, was running at least 5 minutes behind in getting tweets posted. While not an infrastructure disaster, this event certainly pushed most news/gossip sites close to the edge of their capacity, and not many had a graceful way to degrade.
Problems seemed to be even worse for the mobile web. Unaware of what was going on that night, I had posted the following on Twitter:
This wasn’t the first time I have had problems connecting to the web wirelessly through AT&T, but clearly it was more than just typical AT&T issues that ended up causing it this time. In general, I believe the adoption of web enabled mobile devices is outpacing even the fairly aggressive growth in mobile data capacity. Combine that with both a spike in demand and many unresponsive news sites, and the results were no doubt frustrating for many others as well.
The other big traffic spike happened this past Tuesday.
Many sites decided to set up live streams of the memorial service held for Michael Jackson. Though it didn’t go off completely trouble-free, Akamai, the leading video distribution/streaming provider on the web, ended up serving about 20 million live video streams during that time frame. That is nearly 10 times the number of streams they typically handle – by any measure a huge spike. While this number is way short of the 100 million+ viewers that watch live events like the Super Bowl, it still represents around 3 times the audience that watches a typical top rated TV show every week. This was impressive.
Unlike the bit starved mobile web, the wired web didn’t seem to have an issue with overall bandwidth or routing. There were no reports of general slowdowns or serious bottlenecks occurring because of this event, which is great news. What didn’t seem to scale up as well were individual sites. Some of those issues could probably be addressed with a more aggressive adoption of cloud based site deployments. If designed correctly, cloud-based deployments could help these types of sites scale capacity dynamically to better handle unexpected spikes in demand. In effect, this is exactly what all of the major news organizations did by using Akamai to deliver their video, an aspect of their delivery that seemed to work pretty well. They produced and packaged the content itself and then leveraged Akamai’s shared global infrastructure to handle delivery – something they would never be able to do well on their own.
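To sketch the ‘scale dynamically’ idea in the simplest possible terms – the per-server capacity, headroom factor, and traffic numbers below are all invented for illustration – a reactive scaler just maps observed load to a provisioned server count:

```python
import math

def desired_servers(requests_per_sec, capacity_per_server=500,
                    headroom=1.5, min_servers=2, max_servers=200):
    """Pick a server count with built-in headroom for sudden traffic spikes."""
    needed = math.ceil(requests_per_sec * headroom / capacity_per_server)
    return max(min_servers, min(needed, max_servers))

# A normal news day vs. a breaking-news spike:
print(desired_servers(1000))    # 3
print(desired_servers(50000))   # 150
```

The hard part, of course, isn’t the arithmetic – it’s having an architecture where new capacity can actually come online in minutes, which is exactly what a shared infrastructure like Akamai’s (or a well designed cloud deployment) provides.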
We need to start thinking differently about how to build out the web going forward, especially around the optimal use of shared vs proprietary resources. I also think we are probably getting to the point where we should look more closely at what role fundamental network management technologies like QoS, packet prioritization, deep packet inspection and traffic shaping should play on the internet, and how we can make sure they aren’t abused. This is an issue I am somewhat torn about. I am a big proponent of network neutrality, but recognize the very real negatives of an ‘all packets are equal’ approach to managing traffic. Though it tends to be an issue that evokes passion from all sides, we will need to have a rational, dispassionate discussion about it if we are serious about making the web into a truly global media backbone – something it has the potential to become.
Events like these remind us that we still have a lot more to do.
It should come as no surprise to anyone that the folks that developed the Opera browser have been hard at work on something new and different. After all, IE, Firefox, Chrome, and Safari pretty much have innovation in the pure browser space covered.
Last week, Opera Software announced the result of that effort – a browser based collaboration platform called Opera Unite. Here is the video they put together to introduce their new offering:
I’m really torn about Unite. While I’m a big proponent of seeing choice, capability, and control pushed out to the edge of the web, I’m just not sure how well Unite will be able to deliver on this promise in practical terms.
First, there are three big marketplace trends working against this.
These trends tap into the way people work and interact with technology. Overcoming them will require Unite to deliver something so compelling and unique that people would be willing to go out of their way to adopt it. Any hope for that would most likely come in the form of applications people develop on the Unite platform.
And that is a tough position for Opera to be in.
On top of that, there are also a few significant technical issues that can weigh on adoption of an offering like Unite. The two biggest ones I see are architecture and security.
With all of this said, I really do like the concept being promoted by Opera Unite. True edge based connectivity has the potential to change the nature of many things we do on the web. Creating a common platform for social applications is also a compelling concept. Unfortunately, I think these ambitious goals are simply too big for any single company to take on alone.
For Unite to be successful on a broad level, I think that Opera will need to make it open source, and let the market work through the myriad issues that would have otherwise conspired to thwart a single vendor approach. Alternatively, they could package it as an internal corporate collaboration solution, and develop a more conventional business model around selling it.
Unfortunately, I don’t think Opera Software is planning on doing either of these things. And while I would love to see a positive outcome for Unite, I just don’t see success coming from the path they are on.
I love technology. From talking about it, using it, or creating it, to building companies based on it, it's what motivates and inspires me. It's an adventure that never grows old.
This blog is my way of sharing that passion for technology with all of you.
Thanks for joining me.