Well, 201 lines to be exact. What a fool I was.
Short story: we have a strange TIFF file. There has to be an image stored in there somehow, but double clicking on it gives nothing. By the way, this file, together with a million more of them, contains the entire document archive of a company. Some seven years ago they purchased a package to archive digitized versions of all their paper documents, and they have been dutifully scanning and archiving all their documents there ever since. After going to the effort of scanning all those documents, they archived the paper originals off site, organized only by year. Why pay any more attention to the paper archive, after all? If someone wants a copy of an original document, the place to get it is the document archiving system. Only in extreme cases are the paper originals required, and in those cases, yes, one may need a couple of hours to locate the original, as you have to visually scan a whole year of documents. But that is not such a big deal, especially considering the time saved by not having to classify paper.
All was good during these seven years, because the document viewer built into the application works perfectly. However, now they want to upgrade the application, and for the first time in seven years they have tried to open one of these files (which have the .tif extension) with a standard viewer. The result: the old application displays the documents fine, yet standard viewers cannot open them. Trying many of them at best displays garbage, at worst crashes the viewer. The file is 700K in size and the application renders it perfectly, so what exactly is in there?
Some hours of puzzling, a few hexdumps and a few wild guesses later, the truth emerges: the application stores files with the .tif extension, but uses its own "version" of the TIFF standard format. Their "version" follows perhaps the first ten pages of the TIFF specification and then goes its own way. The reasons for doing this could be many; however, I always try to keep in mind that wise statement: "never attribute to malice what can be adequately explained by incompetence".
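For the curious, checking this sort of thing does not take much code. A standard TIFF starts with a byte order mark ("II" or "MM"), the magic number 42 and the offset of the first image file directory; the little sketch below (just an illustration of the approach, not the actual converter) prints what a suspicious file claims to be before you go digging through hexdumps:

    /* Minimal sketch: does this file even start like a TIFF?
     * Standard TIFF: 2-byte byte-order mark ("II" little endian,
     * "MM" big endian), the 16-bit magic number 42, then the 32-bit
     * offset of the first image file directory (IFD). */
    #include <stdio.h>
    #include <stdint.h>

    int main(int argc, char *argv[])
    {
        if (argc < 2) {
            fprintf(stderr, "usage: %s file.tif\n", argv[0]);
            return 1;
        }
        FILE *f = fopen(argv[1], "rb");
        if (!f) { perror("fopen"); return 1; }

        unsigned char h[8];
        if (fread(h, 1, 8, f) != 8) { fprintf(stderr, "short read\n"); return 1; }
        fclose(f);

        int le = (h[0] == 'I' && h[1] == 'I');
        int be = (h[0] == 'M' && h[1] == 'M');
        uint16_t magic = le ? (uint16_t)(h[2] | h[3] << 8)
                            : (uint16_t)(h[2] << 8 | h[3]);
        uint32_t ifd = le ? (uint32_t)h[4] | (uint32_t)h[5] << 8 |
                            (uint32_t)h[6] << 16 | (uint32_t)h[7] << 24
                          : (uint32_t)h[4] << 24 | (uint32_t)h[5] << 16 |
                            (uint32_t)h[6] << 8 | (uint32_t)h[7];

        if ((!le && !be) || magic != 42)
            printf("%s does not even pretend to be a TIFF\n", argv[1]);
        else
            printf("%s: TIFF-looking header, first IFD at offset %u\n",
                   argv[1], ifd);
        return 0;
    }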
The misdeed was, however, easy to fix. A fairly simple 200-line C program (comments included) was able to extract the image and convert it to a standard file format. At least on my Linux workstation.
I was very happy with the prospect of telling the good news to the business stakeholders: your data is there, you have not lost seven years of electronic document archives, it is actually quite easy and quick to convert everything to a standard format, and after doing that you can forget about proprietary formats. However, I then realized that they use Windows, so I had to compile the 200-line C program on Windows just to make sure everything was right.
Checking the source, I could not spot anything Linux specific in the program; it all appeared to be fairly vanilla POSIX. However, what if they are not able to compile it, or the program does something differently? This is one of those moments when you actually want to try it, if only to be absolutely sure that your customer is not going to experience another frustration after their bitter experience with their "document imaging" system, and also to learn how portable your C-fu is across operating systems. Too many years of Java and PL/SQL and you get used to thinking that every line of code you write will run unchanged anywhere else.
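To give an idea of the kind of difference I was worried about, here is a trivial, self-contained example that has nothing to do with the actual converter: a byte-for-byte file copy. About the only trap for code like this when it moves from POSIX to Windows is fopen's mode string, because "r" and "w" mean text mode on Windows (CR/LF translation, Ctrl-Z treated as end of file) and will silently mangle binary data, so anything touching image files has to use "rb" and "wb":

    /* Copy a file byte for byte. Portable between POSIX and Windows
     * precisely because the files are opened in binary mode. */
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        if (argc != 3) {
            fprintf(stderr, "usage: %s input output\n", argv[0]);
            return 1;
        }
        FILE *in  = fopen(argv[1], "rb");   /* "r" would corrupt data on Windows */
        FILE *out = fopen(argv[2], "wb");
        if (!in || !out) { perror("fopen"); return 1; }

        int c;
        while ((c = fgetc(in)) != EOF)
            fputc(c, out);

        fclose(in);
        fclose(out);
        return 0;
    }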
So I set out to compile the C source on Windows before delivering it. That's where, as almost always, the frustration began. The most popular computing platform became what it is now, among other things, by being developer friendly. Now it seems to be on its way to becoming almost developer hostile.
First, start with your vanilla Windows OS installation that likely came with your hardware. Then remove all the nagware, crapware, adware and the rest of the things included by your friendly hardware vendor in order to increase their unit margins. Then deal with Windows registration, licensing or both. Then patch it. Then patch it again, just in case some new patches have been released between the time you started patching and the time the patching round finished. About four hours and a few reboots later, you likely have an up-to-date and stable Windows instance, ready for you to install your C compiler.
Still with me? In fairness, if you already have a Windows machine all of the above is already done, so let's not make much ado about that. Now we're at the interesting part: downloading and installing your C compiler. Of course, for a 200-line program you don't need a full-fledged IDE. You don't need a profiler or a debugger. You need something simple, so simple that you think one of the "Express" editions of the much renowned Microsoft development tools will do. So off we go to the MS site to download one of these "Express" products.
So you get to the download page and look at your options. Now, be careful, because there are two versions of VS Express 2012. There is VS Express 2012 for Windows 8 and there is VS Express 2012 for Windows Desktop, depending on whether you're targeting the Windows Store or want to create... what, an executable? But I thought Windows was Windows. In fact, I can run a ten-year-old binary on Windows and it will still work. Oh yes, that's true, but now MSFT seems to think that creating Windows 8 applications is so different from creating Windows Desktop applications that they have created a different Express product for each. Except for paying VS customers, who can create both kinds of applications with the same product. Express is Express and is different. And you don't complain too much; after all, this is free stuff, right?
As I wanted to create a command line application, with no interest in the Windows Store, and without being sure whether an inner circle of hell awaited if I chose one or the other, I simply chose VS Express 2010. That would surely protect me from the danger of accidentally creating a Windows Store application, or of discovering that command line apps, for example, were no longer considered "Windows Desktop applications". You may think that I was being too cautious or risk averse at this point, but really, after investing so much time in compiling a 200-line C command line utility on Windows, I was not willing to lose much more time on this.
Ah, but hold on, the joy did not end there. I finally downloaded VS 2010 Express and started the installation, which dutifully informed me that it was about to install .NET 4.0. How nice that the .NET 4.0 install required a reboot; I was starting to miss rebooting once in a while after all the reboots the patching had already put me through. At least the install program was nice enough to resume the installation by itself after the reboot. Anyway, 150 MB of downloads later, I had my "Express" product ready to use.
What is a real shame is that the "Express" product seems to be, once installed, actually quite good. I say "seems" because I did not play with it much. My code was in fact 100% portable, and it was a short job to discover how to create a project and compile it. Admittedly, I'm going to ship the customer the build with debug symbols, as I was not able to find where to turn off debug information. Since the program is 30K in size, that's hardly going to be a problem, and if it is, it's 100% my fault. To be honest, I lost interest in VS Express 2010 quickly once I was able to test the executable and verify that it did exactly the same as the Linux version.
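For what it's worth, the Express install also adds a "Visual Studio Command Prompt" to the Start menu with the compiler on the PATH, and building from there sidesteps the IDE project configuration entirely. Something along these lines should produce an optimized binary without debug information (the source file name is just a placeholder):

    rem Build from the Visual Studio command prompt. /O2 optimizes and,
    rem unlike the default Debug configuration, does not embed debug
    rem information; /W3 enables a reasonable warning level.
    cl /O2 /W3 extract.c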
But the point is that, in comparison, I can build a quite complete Linux development environment in less than two hours, operating system installation included, incurring zero licensing cost and using hardware much cheaper than what is needed to run Windows. Why is it that creating a Windows program requires so much time?
What happened to the "developers, developers, developers" mantra? Where is it today? Anyone old enough can remember the times when Microsoft gave away stacks of floppy disks for free to anyone remotely interested in their Win32 SDK. And those were the days before the internet, when CD-ROMs were a luxury commodity. The days when IBM was charging $700 for their OS/2 developer kit. Guess who won the OS wars?
Things have changed, for the worse. Seriously, Microsoft needs to rethink this model if they at least want to slow their decline. At any rate, I guess I've discovered one pattern that can probably be applied to any future OS or platform. Today, to write iOS/MacOS programs you need to buy a Mac and pay Apple $100. The day it becomes more difficult, complex, or expensive (as if Apple hardware were cheap), that day will be the beginning of the end for Apple.
Tuesday, 5 February 2013
The results of my 2012 predictions - 3 wrong, 8 right
A bit late, but it's time to review what has happened with my 2012 predictions. Since the score is clearly in my favor, please allow me to indulge in some self-congratulation, and also to offer my services as a technology trend predictor, at least a better one than the big name market analysis firms. No, not really. But having scored so high nonetheless deserves a bit of self-appraisal, at least.
The bad
Windows becoming legacy. I was wrong on this one, but only on the timing. Microsoft's latest attempt to revive the franchise is flopping in the market, to the tune of people paying to get Windows 8 removed from their computers and replaced with Windows 7. Perhaps Redmond can reverse the trend over time, perhaps Windows 9 will be the one to correct it. But they have already wasted a lot of credibility, and as time passes it is becoming clear that many pillars of the Windows revenue model are not sustainable in the future.
- Selling new hardware with the OS already installed worked well for the last twenty years, but the fusion of mobile and desktop, together with Apple and Chromebooks, is already eroding that to the point where hardware manufacturers are starting to hold the dominant position in the negotiation.
- The link between the home and business markets is broken. Ten years ago people were buying computers with essentially the same architecture and components for both places, except perhaps with richer multimedia capabilities at home. Nowadays people are buying tablets for home use, and using smartphones as complete replacements for things that were done in the past with desktops and laptops.
- On the server side, the open source alternatives keep gaining credibility and volume. Amazon EC2 is a key example, where Windows Server, however good it is, is being sidelined in the battle for the bottom of the margin pool.
JVM based languages. I was plain wrong on this one. I thought that the start of Java's decline would give way to JVM based alternatives, but those alternatives, while not dead, have not flourished. Rails keeps growing, PHP keeps growing and all kinds of JavaScript client and server side technologies are starting to gain followers.
As for computer security... well, the shakeup in the industry has not happened. Yet. I still think that most of the enterprise-level approach to security is plain wrong, focused more on "checklist" security than on actual reflection about the dangers and implications of one's actions. But it seems that no one except me has started to notice. Time will tell. In the end, I think this one was more of a personal desire than a prediction in itself.
The good
Mayan prophecy. Hey, this one was easy. Besides, if it had come true, I wouldn't have had to acknowledge any mistake in a predictions results post.
Javascript. Flash is now irrelevant. Internet connected devices with either no Flash support at all or weak Flash support have massively outnumbered the Flash enabled devices. jQuery and similar technologies now provide almost the same user experience. Yes, there are still some pockets of Flash around, notably games and the VMware console client, but Flash is no longer the solution that can be used for everything.
NoSQL. I don't have hard data to prove it, but some evidence, admittedly a bit anecdotal, from its most visible representative, MongoDB, strongly suggests that the strengths and weaknesses of NoSQL and SQL are now better understood. NoSQL is no longer the solution for every problem, but a tool that, like any other, has to be applied where it is most appropriate.
Java. I have to confess that I did not expect Java to decline so quickly, but as I said a year ago, Oracle had to change a lot to avoid that, and it has not. The latest batches of security vulnerabilities (plus Oracle's late, incomplete and plain wrong reaction) have finally nailed the coffin shut for Java in the browser, with no chance of going back. A pity, now that we have Minecraft. On the server side, the rate of innovation in Java has stagnated, and the previously lightweight and powerful framework alternatives are now seen as just as bloated and complex as their committee-designed, standards-derived brethren.
Apple. Both on the tablet and mobile fronts. Android based alternatives already outnumber Apple's products in volume, if not in revenue. And Apple continues to be one of the best functioning marketing and money making machines on the planet.
MySQL. This one again really comes down to Oracle's attitude. But it has happened, to the benefit of both Postgres and the many MySQL forks (MariaDB, Percona, etc.) that keep at their core what made MySQL so successful.
Postgres. In retrospect, that was easy to guess, given the consistent series of high quality updates received in the last few years and the void left by Oracle's bad handling of MySQL and the increasingly greedy SQL Server licensing terms.
Windows Phone. Again, an easy one. A pity, because more competition is always good. As with Windows 8, it remains to be seen whether Microsoft can, or wants to, rescue this product from oblivion.
Will there be any 2013 predictions now that we're in February?
On reflection, some of these predictions were quite easy to formulate, even if somewhat against the general consensus at the time. That's why there are likely not going to be any 2013 predictions. I still firmly think that Windows will go niche. That is happening today, but we have not yet reached the "Flash is no longer relevant" tipping point. You'll know we've arrived there when all the big name technologists start saying that they saw it coming for years. They have not started saying that. At least not yet.
Anyway, this prediction exercise has left my psychic powers exhausted. Which is to say, I don't have that many ideas about how the technology landscape will change during 2013. So as of today, the only prediction I can reliably make is that there won't be 2013 predictions.
Developing Android applications with Ubuntu - II
It has been a few months since my last post, and I've been quite busy with other interests during that time, but I finally got some time to reflect and post a few updates.
Last time I wrote something, it was my intention to start playing around with Android applications.
Note that in this context, "applications" means software packages where the final user is also the one paying for the application. Enterprise packages can have notoriously bad user interfaces, and the people using them can complain as much as they want, but in the end they are being paid to use them, and unless someone can positively prove the productivity gains of a UI upgrade, those user interfaces will stay as they are, now and forever.
Android applications fall squarely into the category where asking someone for money raises the level of expectations. Nowadays, the race to the bottom in application pricing has left very little margin per unit sold. Very few Android apps cost more than 99 cents; the underlying idea is that you'll make up what is lost in per-unit margin through the sheer market size of the billions of Android devices and the resulting sales volume. The end result is that, even for such a small amount of money, users expect polished, well designed, reliable and well behaved applications.
Compound that with the problem of market saturation. "There is an app for that" is a very convincing slogan, and it is also true in the Android market. Almost every application niche has already been occupied. It's very hard to think of an application that is not either already done well enough to own its niche, or surrounded by enough good-enough free alternatives that nobody seriously thinks of making money selling one. There is always the ad-supported option, of course, but that introduces a lot more uncertainty into the equation.
(Now someone will say that the market saturation problem is really just an ideas problem, and they will probably be right. It could well be my own problem that I'm not able to come up with new ideas.)
So far I've created very few things worth trying to sell, or even give away. But all is not lost; at least this experience has reminded me of an important fact that I had almost forgotten: developing applications is difficult. I mean, one gets used to looking only at the server side portions of an application and analyzing them in detail, while essentially ignoring all the other components.
The phone development environment starts by throwing you back to days past. Seemingly innocent development decisions have consequences for CPU and RAM usage that you're used to dismissing as transient spikes on a desktop or server, but on these limited machines they make the difference between a usable application and one that the OS decides to kill because it is taking too long to respond or using too much memory.
What we take for granted today, such as dealing with different time zones (with daylight saving time rules changing from year to year), different character sets and different localization rules, is the result of lots of people working over a long time, including on such unglamorous things as standards committees. These are amazing achievements that have standardized and abstracted huge portions of application functionality, but even so, they cover only a small part of the scope an application has to provide.
And let's face it, the most unpredictable, irrational, demanding and unforgiving component in any software application is the human sitting in front of it. In any application, even the trivial looking ones, there is a lot of user interaction code that has to deal with events happening in crazy order, data entered in weird formats that is still expected to be understood, and business rules that have to match the regulatory changes of the last fifty years or so.
Further proof of that: the number one category of security vulnerabilities is the exploitation of memory management errors (buffer overflows, use of dangling pointers) by... usually sending the application malformed input. This is no accident: dealing with user input correctly is one of the hardest parts of creating a satisfactory user experience.
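A textbook illustration of how that happens in C, kept deliberately small (the functions and the sixteen-byte buffer are made up for the example):

    /* Why malformed input is dangerous: the first version trusts the
     * input length, the second bounds the copy to the destination size. */
    #include <stdio.h>
    #include <string.h>

    /* Unsafe: overflows 'name' as soon as the input exceeds 15 characters. */
    void greet_unsafe(const char *input)
    {
        char name[16];
        strcpy(name, input);              /* no bounds check at all */
        printf("Hello, %s\n", name);
    }

    /* Safer: snprintf never writes past the destination buffer. */
    void greet_bounded(const char *input)
    {
        char name[16];
        snprintf(name, sizeof(name), "%s", input);
        printf("Hello, %s\n", name);
    }

    int main(void)
    {
        /* A well behaved user... and one sending a hundred 'A's. */
        char hostile[101];
        memset(hostile, 'A', 100);
        hostile[100] = '\0';

        greet_bounded("world");
        greet_bounded(hostile);           /* truncated, but no overflow */
        /* greet_unsafe(hostile);            would smash the stack */
        return 0;
    }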
Let's not even add regulatory compliance, audit requirements, integration requirements with the rest of the environment (perhaps using those beloved text files), technical standards compliance and cross-platform requirements.
All this adds up to a delicate balance between the user experience, the real world metaphors and processes being modeled and implemented, and the technical environment. And all this for 99 cents.
I'm not completely dropping the idea of someday selling an Android application, but it will have to wait for the right idea to come along, and also for the time needed to execute it properly.
There is also an emerging market for Android applications, one that is starting to surface and gain momentum as business adoption of Android and iPhones expands: the enterprise application, mobile version. Yes, expect some of those ugly user interfaces to be ported over to mobile platforms, and this is likely the next big revenue source for mobile developers. And of course, I expect these applications to have performance issues, too.
But so far, my biggest lesson has not been about the ADK, Dalvik, ICS vs. Jelly Bean or Eclipse, for that matter. My biggest lesson from all this is that there is a world of difference between focusing on a single area of an application to improve its performance or resource usage, and delivering a complete application. The latter requires a different skill set. And after spending a while creating mostly toy Android applications, I'm glad this experience has reminded me of all that. Living too long in the ivory tower can make you forget that these simple things are, in fact, quite complex.