A couple of days ago, an unusually honest internal memo from Nokia CEO Stephen Elop revealed that the company is at a crossroads, and that a new smartphone strategy is necessary.
Today, Nokia and Microsoft have officially entered a strategic alliance that makes Windows Phone 7 Nokia’s primary smartphone platform, but also extends into many other Microsoft services such as Bing, Xbox Live and Office.
Furthermore, the two companies will combine many complementary services; for example, Nokia’s application and content store will be integrated into Microsoft Marketplace, while Nokia Maps will be – as Nokia’s press release puts it – at the heart of Bing and AdCenter.
Nokia will also undergo significant changes in operational structure and leadership. As of April 1, Nokia will have two main business units: Smart Devices, led by Jo Harlow, and Mobile Phones, led by Mary McDowell.
Of course, with such significant changes in Nokia’s strategy, one has to wonder what will happen to its other smartphone platforms. Symbian, says Nokia, will become a “franchise platform, leveraging previous investments to harvest additional value,” and MeeGo will be an “open-source, mobile operating system project.”
While Nokia claims it expects to sell approximately 150 million more Symbian devices in the future, it’s obvious that from now on few people will buy Symbian devices because they run Symbian software. Symbian will more likely power Nokia’s mid-range smartphones and feature phones, with Nokia’s flagship phones running Windows Phone 7.
Microsoft and Nokia’s leaders are, of course, enthusiastic about the partnership. “We will create opportunities beyond anything that currently exists,” said Nokia CEO Stephen Elop.
What do you think? Was the partnership with Microsoft the right move for Nokia, and vice versa? Please, give us your opinions in the comments.
After a long wait, Microsoft’s Kinect for the Xbox 360 went on sale in the US yesterday amid fanfare and a dance party in New York’s Times Square, and the company expects to sell 5 million of its new controller-free gaming systems this holiday shopping season. We have been waiting and waiting for this little marvel – previously called Project Natal – for months.
Though Microsoft has made Europe wait another week – the Kinect hits European markets on November 10th – we decided to have a look at what the tech gurus are saying about this small motion-sensing camera that is expected to change the future of gaming as we know it today. I was expecting over-the-top reviews of the Kinect, given the very optimistic expectations we all had for the controller, so I was taken aback by the complete turnaround in the opinions of the opinion makers.
Tim Carmody at the Gadget Lab explained how motion detection works in Xbox Kinect and also believes that it is different from a traditional game controller.
“A traditional videogame controller is individual and serial: It’s me and whatever I’m controlling on the screen versus you and what you’re controlling. We might play cooperatively, but we’re basically discrete entities isolated from one another, manipulating objects in our hands.
Kinect is something different. It’s communal, continuous and general: a Natural User Interface (or NUI) for multimedia, rather than a GUI for gaming.
But it takes a lot of tech to make an interface like that come together seamlessly and naturally.”
The other technology analysts seem a bit disappointed at the moment, though, mostly because of the price tag: $150 (£93) just for the controller, and $50 (£31) for each game title.
Ross Miller at Engadget believes the Kinect as hardware is great, but that there’s plenty of room for software engineers and UI designers to improve. He expects Microsoft to pour resources into improving the experience for a good while.
Comparing Sony’s Move, Nintendo’s Wii and Microsoft’s Kinect, he says “By the numbers, picking up Move starter bundle and an extra controller is the same price, and in that setup you also get a two-player experience. Move’s Sports Champions is arguably a stronger bundled title compared to Kinect Adventures. But really, we feel like both systems — along with Nintendo and the Wii — are just taking a different approach to the same issue. Where does interaction go next? How do you bring it to the living room? Back to the Kinect, though: we think there’s some fighting spirit inside that glossy shell, but it’s definitely got a lot of growing up to do first.”
Jason Chen at Gizmodo, confused like hell, also doesn’t seem much impressed with the game titles available right now or with the price tag, as he goes on to say: “Having only 1 title out of 17 launch games truly do something compelling and new isn’t a very good launch, especially for people who don’t like dance games. Right now, the answer to the fundamental question of “are you having fun with Kinect” is, unfortunately, “not really.” Unless you like dance games. The potential is there, but you need to think of Kinect like the launch of a new console: Wait until the games you really want are available—or maybe even the next generation.”
Similarly, Ben Kuchera at Ars Technica is critical of the motion sensor’s high price tag and its thin selection of games. He also believes that it will not remain a button-free controller: “Microsoft is going to release a Move-style controller for the Kinect within a year or so. Both gamers and developers are going to be frustrated by all the things the hardware can’t do, and they’ll demand a way to interact with games using at least one or two buttons. It’s not a coincidence that so many games at launch are in the same few genres, you know. The hardware simply can’t handle much else.”
There’s one exception, though: David Pogue at the New York Times seems optimistic and believes that the Kinect, like the Xbox 360, will be Microsoft’s saviour in hardware, concluding, “the Kinect’s astonishing technology creates a completely new activity that’s social, age-spanning and even athletic. Microsoft owes a huge debt to the Nintendo Wii, yes, but it also deserves huge credit for catapulting the motion-tracking concept into a mind-boggling new dimension. Just this once, the gods have lifted the Curse of the Microsoft Hardware.”
While we trust the opinion of these technology big guns, we’ll keep our fingers crossed and wait to get our hands on the “game changer”.
This post originally appeared on the mouse2house blog, where I regularly write about technology.
It is an unfortunate truth that the glory days of platform trolling are behind us. Where once we had an enormous variety of targets with their many foibles—the legendary user-friendliness and rich capabilities of MS DOS, Apple’s infamous low prices, Windows NT’s svelte size and minimal hardware demands, IBM’s memorable and effective OS/2 marketing campaigns, BeOS’s rich selection of software, Linux’s top-notch hardware support—the computing world of today is so much more boring.
Those features that were once so important to the platform wars—preemptive multitasking, protected memory, and multiuser security, to name a few—are now taken for granted. No mainstream operating system goes without.
Things really took a dive with Apple’s 2005 decision to make the switch to Intel processors. The company’s long history of claiming, in spite of all objective data, that its PowerPC-based systems were not just as fast as x86 machines but substantially OMG-faster came to an end. The glory days of Photoshop bake-offs, those exciting demonstrations where Steve Jobs would strut around on stage and run a specially chosen set of Photoshop filters to show that the hardware he was hawking wasn’t actually godawful, were at an end. After Thinking Different(ly) for so long, Macs were relegated to plain old PCs.
The combination of everyone getting operating systems that weren’t completely horrid and everyone using the same hardware has, therefore, taken a lot of the passion out of the traditional platform wars. Platform warriors have not gone away—they’ve just moved on to the greener pastures of bitching about other people’s smartphone choice: it’s just unthinkable that someone would even consider getting a phone that is and/or isn’t the latest iPhone/Android handset.
This hasn’t stopped Microsoft or Apple from trying to stoke the fires of the platform wars. Apple’s recently ended Mac versus PC campaign went to great lengths to paint PCs as buggy, insecure, and just plain dull—albeit harmless and likeable—while for some unfathomable reason choosing to portray Macs as, well, complete asshats. Smug, arrogant, hipster asshats. Honestly, did anyone like Mac? Didn’t you just want to slap him for being a jerk and give PC a great big hug? It was an interesting campaign choice.
Microsoft, meanwhile, has fought back with websites and Facebook pages dedicated to extolling the virtues of Windows PCs and denigrating the Mac OS X opposition, with a healthy mix of truth and BS.
So it’s against this backdrop that we felt it was time to get back to basics in the platform wars. The truth of the matter is, there’s plenty that sucks on both sides of the fence. For all you Ravens out there, we’re going to kick off with a guide to the things that continue to make the world of PCs irredeemably awful, leaving Macs as the only sensible choice. Tomorrow, we’ll tell you why Macs are wretched and overpriced, and Windows is the only realistic alternative if you want to get anything done.
Backwards compatibility: a curse, not a blessing
When Windows NT hit the market in 1993, it was the first ever 32-bit Windows product. As a result, it didn’t have a whole lot of software available for it; new operating systems rarely do. To resolve this obvious problem, Microsoft made Windows NT compatible with the widely used 16-bit Windows, and 16-bit Windows’ partner in crime, DOS.
Of drive letters and DLLs
This compatibility took two forms. Windows NT could directly run programs built for these other systems without having to install virtual machines or use dual booting or anything like that. More insidiously, Windows NT’s new, modern 32-bit API was heavily based on the 16-bit API of its spiritual predecessor. This was done so that developers would have an easy time porting existing 16-bit programs to the new platform—it meant that the number of code changes they had to make would be minimal.
The repercussions of this were many and varied. Some, like the use of drive letters and backslashes, are quite superficial. We might argue that a nicer scheme could be developed for naming disks, and we would probably prefer Windows to use the same forward slashes that URLs do, but both choices work acceptably enough.
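As it happens, the forward/backslash distinction matters less in practice than it looks: most path-handling code treats the two as interchangeable on Windows, and Python’s `pathlib` makes this easy to see (the path below is just an invented example):

```python
from pathlib import PureWindowsPath

# pathlib treats forward and back slashes in Windows paths as equivalent
# separators, mirroring the leniency of most Win32 file APIs.
a = PureWindowsPath("C:/Users/example/file.txt")
b = PureWindowsPath(r"C:\Users\example\file.txt")

print(a == b)   # the two spellings name the same path
print(str(a))   # rendered canonically with backslashes
```

The separators are normalized on construction, so comparisons and string output behave the same regardless of which slash the caller used.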
Other decisions are more unfortunate. The recent DLL loading flaw exists precisely because today’s Windows follows design decisions made over 20 years ago for 16-bit Windows. The DLL loading behavior made some amount of sense back then (or at least, it was essentially harmless). It’s an absolute liability today.
16-bit Windows had various limitations that made sense in the days of machines with 1MB RAM and floppy disks, but are thoroughly anachronistic on modern machines. For example, 16-bit Windows limited filenames—including the path and drive letter—to a total of 260 characters. Modern Windows doesn’t have that limitation—except in a few places where it does. Software is perfectly capable of creating longer names, up to a total of about 32,000 characters, and for the most part, these longer names will work fine, and are a supported, official capability of the system. Except they don’t work everywhere. The Windows command prompt can’t use them. Windows Explorer will give peculiar errors if an attempt is made to change into a directory with a long filename.
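The arithmetic behind the legacy limit is simple enough to sketch. Here 260 is the classic Win32 MAX_PATH constant, which counts the drive letter, every separator, and the terminating NUL; the deeply nested path is a made-up example:

```python
MAX_PATH = 260  # classic Win32 limit, including drive letter and trailing NUL

def exceeds_legacy_limit(path: str) -> bool:
    """True if this path would trip tools still bound by the old 260-char limit."""
    return len(path) + 1 > MAX_PATH  # +1 for the terminating NUL

# A plausible deeply nested directory tree blows past the limit easily.
deep = "C:\\" + "\\".join(["annual_report_archive"] * 15) + "\\q3.xlsx"
print(len(deep), exceeds_legacy_limit(deep))
```

A path like this is perfectly legal as far as the filesystem is concerned, yet the command prompt and parts of Explorer will choke on it.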
Derp derp derp
A fatal flaw? No, probably not. A huge inconvenience when programs create such long names (even if accidentally)? Definitely. Acceptable in the year 2010? Not in the least bit.
GDI handles and USER objects
Still, at least this is basically harmless. Less harmless are some of the other limits inherited from the 16-bit world. Windows has two limited pools of resources, named “GDI handles” and “USER handles” that are used to reference objects in, respectively, the GDI and USER subsystems, terminology that itself belongs to the 16-bit era. GDI objects are used for graphics. Programs that draw on screen do so using “brushes” and “pens” and “bitmaps” and “fonts”; these are all GDI objects, and all must be referenced using a GDI handle. USER objects are things like windows, and menus, and icons, and mouse cursors.
In 16-bit Windows, it was natural to limit both of these to 2^16 (65,536) objects of each type, so that GDI and USER handles could both be 16-bit values. In 32-bit Windows, and indeed 64-bit Windows, it’s considerably less natural to, er, retain the same limit. To avoid breaking ported 16-bit applications, 32-bit Windows retained the 16-bit limit. Even worse, Windows restricts the number of USER and GDI objects to numbers lower than these theoretical limits.
This might sound like some abstract complaint, but unfortunately, it has real repercussions. The GDI limit (in practice set to just 10,000) is readily attainable in any heavily-used machine, especially as programs that leak handles are unfortunately common. Once the limit is hit, the user experience is decidedly miserable. Programs need to be able to create GDI objects to draw to the screen, and if the limit has been reached, they can no longer do so. The result is that windows stop drawing properly. You might try to open up Task Manager to kill the misbehaving program, but alas, Task Manager needs GDI objects to run. If they’ve run out because the limit has been hit, Task Manager can’t draw its window either.
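To see why a single leaky program is so corrosive, consider a toy model of a fixed-size handle table. The 10,000 limit mirrors the default per-process GDI quota mentioned above, but the class itself is purely illustrative, not how Windows implements it:

```python
class HandlePool:
    """Toy model of a fixed-size handle table, like Windows' GDI handle pool."""

    def __init__(self, limit=10_000):
        self.limit = limit
        self.in_use = set()
        self.next_id = 0

    def alloc(self):
        if len(self.in_use) >= self.limit:
            # This is the point where windows stop drawing properly.
            raise RuntimeError("out of GDI handles: drawing calls now fail")
        handle = self.next_id
        self.next_id += 1
        self.in_use.add(handle)
        return handle

    def free(self, handle):
        self.in_use.discard(handle)

pool = HandlePool()
# A program that leaks one brush per repaint exhausts the pool eventually,
# and every other program that needs to draw fails along with it.
try:
    for _ in range(10_001):
        pool.alloc()  # leaked: never freed
except RuntimeError as exc:
    print(exc)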
This might not happen every day, but it does happen, and it’s crap, and it’s crap because core elements of Windows’ design are inherited from 16-bit Windows.
And it infects 64-bit Windows, too
What makes it all worse is that 64-bit Windows has all the same limits even though 64-bit Windows can’t run 16-bit applications at all. When operating in 64-bit mode, x86-64 processors can’t switch to 16-bit compatible mode. They can switch to 32-bit compatible mode, enabling the use of 32-bit software in a 64-bit operating system, and obviously they can use a 64-bit mode, but there’s no 16-bit mode. This means that 64-bit Windows can’t run DOS or 16-bit Windows software. If this is something that you really must do, you will have to use a virtual machine.
And yet, all those same limitations remain present, even for 64-bit native applications. Just as 32-bit Windows was designed to be similar to 16-bit Windows to aid porting, so 64-bit Windows was made into a carbon copy of 32-bit Windows. God forbid that application developers should ever have to fix their broken apps to get them to run on 64-bit Windows. No, far better to make things worse for everyone just so they can hit “compile” and be done with it.
Needless to say, this is not a problem that Macs have to deal with. Apple is not a stranger to backwards compatibility, as its “Classic” and “Rosetta” systems demonstrated. But the company doesn’t drag everything forward forever. Sometimes, the past should be left behind.
Windows: where taste went to die
Windows is, let’s face it, pretty damn ugly. A bit better than it used to be, but ugly all the same. The operating system is ugly. First-party applications are ugly. Third-party applications are ugly. Wherever you look, there’s ugliness.
OK, perhaps it’s not as bad as the Ford Edsel, or Pyongyang’s Ryugyong Hotel (an edifice that offends in so many ways), but endemic to Windows is a consistent lack of style. It’s not a tasteful operating system.
The problem is, I think, twofold. There’s the Windows design aesthetic itself, and then there’s the fact that nothing follows the design rules anyway. Together, these create an eyesore.
The current look, the Aero Glass look, gives each window a faux glass border. The primary motivation behind this decision appears to be because they could. Aero Glass made its debut in Windows Vista, and is used to show off the fact that Windows Vista has a composited desktop that is, behind the scenes, powered by Direct3D. The glass borders are translucent and blurred, as if the glass had been sand-blasted, an effect that relies on a pixel shader. In essence, the whole thing is a tech demo. It is telling the world, “We finally have a modern display layer: look at what we can do!”
But just because you can do something doesn’t mean that you should. Let’s face it: it doesn’t actually look like any piece of glass anyone has ever seen. The reflections, the translucent effect, they all look, well, fake. Even if they looked real, one would have to wonder, what exactly is the point? The window borders are not actually made of glass. So why the pretense?
You might get away with that in a piece of art, but this is supposed to be a functional piece of software. Functionality should come first, and in that regard, Aero Glass does very poorly indeed. The translucence means that any text placed in the title bar is obscured by anything that lies beneath the window. Microsoft has tried to remedy this by adding a white glow beneath the title bar text, ensuring that it does maintain some amount of contrast, but there are limits to how well this works. Some programs opt to extend the glass border and then try to write text onto the glassy parts, and the effect is horrible. A fine case in point is Windows Media Player 11, the version that shipped with Windows Vista. The bottom part of the window, with the playback controls, is all glassified. The result is that the track name indicator is all but illegible.
This is not designed with usability in mind
The borders are also enormously thick. Sure, it’s possible to change them, but by default, they are quite grotesque. Windows 7 has the very handy new Aero Snap feature, allowing windows to be docked to the left and right sides of the screen, but when you do this, there’s a gaping void running down the center of the screen, where the two windows’ borders abut one another. It’s wasteful and unnecessary. I’m not saying that there should be no space around windows; it’s important to keep each window distinct from its neighbors, and we wouldn’t want the interface to be cramped. But you can have too much of a good thing.
What offends me most of all is the artifice of it all. Operating system windows are not physical objects. They have no physical, real-world counterparts. They are an abstraction. The operating system should be honest about this. Instead of attempting to anchor windows in the real world, the designers should make peace with their unreality. A real piece of glass might need to be thick, and have rounded edges, and might catch reflections, but those are not features—they are constraints imposed by reality. The ornamentation gets in the way. The translucency, for example, is actually less usable and less useful than an opaque window border. Yes, it looks kind of cool, but it doesn’t actually work well.
Microsoft’s new Metro interface, the one found in Windows Phone 7, strenuously avoids the kind of fakery that is endemic to desktop Windows. It uses simple geometric shapes and stylized graphics throughout. Rather than striving to be something that it is not, it is unapologetic and uncompromising about its computerized nature. As a result, it is a lot cleaner and a lot slicker—and further, it is a lot easier for applications to match.
UI design rules: screw ‘em
Which brings us on to the other side of Windows ugliness: applications. The design cues of the operating system may not be perfect, and the operating system sure isn’t consistent about them, but they exist, so for crying out loud, could applications start to respect them?
It goes on
It never ends.
Kill me now.
Some of them are just flat out ugly, but even the ones that have had some effort made to look tolerable don’t look consistent. Even something as simple as a menu bar probably has five or six different renditions, and that’s just counting software from Redmond. Look further afield and the proliferation becomes even worse.
0x8000WTFBBQ is not a useful error message
When Windows Update has failed to install an update, giving me a cryptic message with a hexadecimal number is irritating. When Windows Live Messenger can’t sign in, giving me a hex error is downright infuriating. Tell me what the actual problem is. If the network is crapping out, say so using actual words. If I’ve run out of disk space, tell me in English. Don’t spit out an error code at me.
Sure, I guess in some sense an error code is better than nothing. I can Bing an error code. It’s not normally useful to do—normally I just find a load of other people with the same cryptic message and no idea what it actually means—but OK, I do somewhat appreciate that the detail is there if I really want it. But there needs to be more. These error codes are usually quite specific, so tell me what they mean.
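The translation being asked for here is not hard; at its simplest it is a lookup table keyed by the error code. A minimal sketch, where the table and function names are invented and the hex codes are shown purely for illustration:

```python
# Hypothetical mapping for illustration. On real Windows, codes like these
# would be translated via the FormatMessage API rather than a hand-built dict.
KNOWN_ERRORS = {
    0x80072EE7: "The server name could not be resolved; your network may be down.",
    0x80070070: "There is not enough space on the disk.",
}

def explain(code: int) -> str:
    """Turn a raw error code into English, falling back to the hex value."""
    return KNOWN_ERRORS.get(code, f"Unknown error 0x{code:08X}")

print(explain(0x80070070))
```

Even the fallback case is friendlier than a bare hex dump, because it at least frames the number as an error code the user can search for.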
For example, as I’m writing this, my Internet connection has just gone belly up. As a result, Messenger can’t sign in. Fair enough. I understand how the lack of Internet connection would be a problem here. But does it tell me, “I can’t actually contact the Messenger server, your network seems to be down”? No, because that might be useful. Instead, it tells me this:
Oh, I’ll just click that to find out what’s wrong.
If I tell it to show me the “details”:
Brilliant. The “Get more information…” link might be useful, I don’t know. See, it opens a website. And my Internet connection has just gone down. In a program where network errors are something of a big deal—in fact, Messenger has its own little network diagnostic tool that does a bunch of tests to try to figure out network problems—don’t you think relying on the network actually working to deliver you useful information is a bit of a problem? Especially when the software knows there’s a network issue? I sure do. Of course, the website is useless anyway, so it’s all academic.
The most annoying part about these error messages is that many of them are actually pretty specific. Not universally, but a lot of the time, the program has a fairly good idea of what the problem is. So the decision not to tell me anything useful is a deliberate act of laziness.
And it happens on Windows a lot. Microsoft is probably the worst offender, or at least the one who most consistently offends. It’s enormously user hostile. The fact is, things will go wrong. They go wrong on every platform; it’s an unfortunate fact of life. It’s not the errors that are the problem here: it’s the useless error messages.
The hoary old security chestnut
When you’re a big target, people want to bring you down. It’s why Twitter is regularly attacked while simultaneously no one gives a damn about identi.ca, and it’s why spammers and phishers are so incredibly indiscriminate. If you’re an attacker wanting to make some money—and these days it’s all about the money—you aim for the biggest target. And that means aiming at Windows.
Windows doesn’t have a perfect security record, it’s true. Nothing does. But Windows today—Windows 7 and Windows Vista, the modern versions—is not the Windows of yesteryear. It has strong systematic protections against a range of security flaws. Microsoft has developed a range of methodologies to try to minimize security risks in any new development, and they’re paying off. In point of fact, Apple would do well to learn from Redmond; the company has made plenty of boneheaded decisions of its own, and the continued attacks enabling jailbreaks of iOS should be an acute embarrassment. Its record is decidedly spotty. I’ll talk about that more tomorrow.
But the fact remains that if you use Windows, you’re painting a great big target on yourself. You are, by default, under attack. And sometimes those attacks may succeed. If you do things right—install patches when they’re made available rather than weeks or months later, leave the built-in firewall enabled, delete spam and phishing mail, use a current browser version and enable your browser’s anti-malware protection and, yes, install anti-virus software and keep that up-to-date—you’ll tend to be OK. But if you disregard all this, you’ll probably get exploited.
Meanwhile, if you use a Mac, and do, well, nothing, you probably won’t. Again, it’s worth pointing out that this isn’t because Macs are invulnerable. They aren’t. In fact, they’re widely regarded as easier to exploit than Windows machines. It’s just that people are aiming at the operating system with around 95 percent market share rather than the one with around 5 percent.
If Macs became enormously successful, a change in attack profile would inevitably occur. But the threshold is probably quite high. Internet Explorer, for example, has slumped to 60 percent of the market, and Internet Explorer 8, in particular, is pretty robust. Nonetheless, attacks targeting Microsoft’s browser are still more abundant than those aiming at the number two browser, Firefox.
The upshot of it is, PC users have to care about security, at least a little bit. Mac users can be completely cavalier.
Microsoft’s dysfunction and the prevailing aura of meh
“Eating your own dogfood”—or just “dogfooding”—is the practice of using the products that your company develops. It’s an important, valuable thing to do, because it means that you guarantee that your product is actually useful, reliable, and effectively constructed. It means the product works; that you can run a business on it. Traditionally this has been one of Redmond’s strengths. Enterprise products like Active Directory, IIS, SQL Server, and Exchange are indeed dogfooded, and are much stronger for it.
But there are awkward gaps. Windows Vista introduced a raft of new APIs—a whole new sound subsystem, and a new media framework—that Microsoft then virtually ignored in its own software. Windows 7 did the same, with Direct2D and DirectWrite. Designing a good API is hard. Designing a good API when you don’t even have any software that uses the API is even harder.
The result is that the new APIs contain mistakes and bugs that render them less than useful. Sometimes the problems are noticed before the software ships, allowing it to be fixed. For example, Windows Vista’s new sound system was missing features that were essential to high-end audio software. Microsoft did fill in the gaps before Vista shipped, but it caught sound card vendors off guard. The result was they didn’t include driver support for the extra features for many months after the software went on sale.
Sometimes, however, the problems aren’t noticed until much later. Direct2D is an example of this. Direct2D is a good concept. It provides high-performance 2D graphics that can exploit far more of the power of modern video cards than the old GDI API. It was introduced with Windows 7, and is available as an add-on for Windows Vista. Unfortunately, at the time Windows 7 was released, there was virtually no software that used Direct2D. Not surprising in a way, as it was brand spanking new, but it caused a problem: once Redmond did try to use Direct2D, for Internet Explorer 9, their developers found that it didn’t actually work properly.
There was nothing disastrous, but the result is that Internet Explorer 9 requires a bunch of hotfixes to be installed first. These hotfixes are why the browser was originally going to require Windows 7 Service Pack 1, though the company has since relented. Though Service Pack 1 will include the necessary hotfixes, it won’t be required; the standalone hotfixes will work too.
Now, OK, none of this is catastrophic. Things are getting fixed. But it’s annoying for developers—the Mozilla graphics developers have had similar problems with Direct2D—and it points at a greater malaise with the standard PC operating system. Microsoft is, for want of a better word, guessing at what features to add and how to design them, and the result is that it’s adding things that just don’t quite work properly. This is not a tightly written, coherent operating system, one in which every part has a clear function and works correctly.
And this matters. Microsoft does a lot of work to make the operating system better—to make it do more things, more efficiently—but that work is wasted if these new features don’t quite work right. It’s also wasted if nobody actually uses them.
Using Direct2D, for example, certainly makes software superior: it makes it faster, easier to write, more flexible. That’s why Internet Explorer 9 is using it. But because it shipped in a state that was, well, kinda broken, those benefits could never actually materialize.
Not Invented Here syndrome
The company’s Not Invented Here attitude is also a persistent problem. It’s not just that the company reinvents work done by others. It’s that teams within the company reinvent the work done elsewhere within the company. Over and over again. Let’s take something really simple: tabs. Tabs are used in lots of places, like in dialog boxes, Web browsers, and text editors. Microsoft must have invented a dozen different forms of tab.
There are the standard ones used in the file properties dialogs. There are the colorized ones used in Internet Explorer 8, and the non-colorized ones used in Internet Explorer 7. There are the ones used in Visual Studio’s text editor. The ones used in the Expression Blend design tool. The new ones used in the new Windows Live Messenger.
It is infuriating. They all look different. They all work differently. Some of them can be dragged and dropped. Some can be middle-clicked to close. Some can be torn off into new windows. Some let you “close all.” All because every time a team inside Microsoft wants to use some tabs, they don’t say “what does the OS provide” or “what have other teams done that’s good”. They just write their own, and it’ll be different, and probably worse, than the ones used elsewhere.
This rampant inconsistency, inconsistency both in look and in feel, is perhaps the defining characteristic of the Windows experience. The prevailing sense that nobody really cares if it’s any good. No one cares that it all feels right, that it does the same thing in the same, predictable way. No one cares if it looks good. Sure. It all more or less works. But shouldn’t we be striving for something better than that? Shouldn’t it work cleanly and efficiently? Shouldn’t it fill us with joy each time we use it?
There’s certainly a company that thinks that your computer experience should be that way. It’s the one from Cupertino, and I’ll talk about it tomorrow.
On November 20, 1985, a small technology company out of Bellevue, Washington launched a 16-bit graphical operating system for the PC. Originally called Windows Premiere Edition, Windows 1.0 soon became the foundation for the world’s most prevalent operating system and for one of the most dominant technology companies in history.
Now, 25 years later, Microsoft is a household name and co-founder Bill Gates remains the world’s wealthiest person. Back in 1985 though, there was no guarantee of success or knowledge that Windows would dominate the world. It was the beginning of a revolution in computing.
On Friday, Microsoft Chief Software Architect Ray Ozzie resurfaced some of Microsoft’s history in a recent post on his personal blog. In a sealed packet in his office, he uncovered the original press kit for Windows 1.0 and decided to put the documents online. It’s a fascinating look into the beginnings of computing and into a technology that has fundamentally changed our world.
From the company’s original press release:
Microsoft Windows extends the features of the DOS operating system, yet is compatible with most existing applications that run under DOS. Windows lets users integrate the tasks they perform with their computer by providing the ability to work with several programs at the same time and easily switch between them without having to quit and restart individual applications. In addition, it provides a rich foundation for a new generation of applications.
“Windows provides unprecedented power to users today and a foundation for hardware and software advancements of the next few years,” said Bill Gates, chairman of Microsoft. “It is unique software designed for the serious PC user, who places high value on the productivity that a personal computer can bring.”
Windows 1.0 was the beginning of the Control Panel and the Clipboard, but more importantly it was the beginning of an era that brought personal computing to billions of households worldwide.
Microsoft on Monday announced details regarding Windows Phone 7 Series’ application store, software development kit and user interface.
As leaked documents hinted in February, the Silverlight and XNA programming environments will play major roles for third-party software developers. Microsoft previewed the software toolkits at its MIX developer conference this morning.
“I think we’ve been very clear since we first started talking about [Windows Phone 7 Series] that it represents a sea change for Microsoft,” said Charlie Kindel, manager of Microsoft’s Windows Phone App Platform and Developer Experience program, in a phone interview with Wired.com. “We’ve revamped just about every aspect of how we build phone software, ranging from how we think about customers to how we do the engineering for the product.”