Thursday, 29 December 2011
Unlike other years, where my "predictions" entry at the end of the year was a look back, this time I'm going to try to predict what is going to happen in a few technology areas in 2012. Oh yes, another not-so-database-centric post. Well, it does have some database content. Read on.
First, I don't believe in the "the year of..." idea. At least applied to technology, it does not make much sense. There has not been a "year of Facebook", "year of Windows Server", "year of Lotus", "year of Office", "year of Novell" or "year of Google". We have never had a "year of Oracle", "year of SQL Server" or even a "year of iPad". True, some of these products have been very successful at launch time, quickly reaching a lot of popularity. Some people take these launch dates as inflection points in trends, but they tend to forget how strongly those products have kept growing over time. No successful product I can think of -and correct me if I'm wrong- has been launched, ignored for a long while, and then boomed.
So don't expect these predictions to say "2012 will be the year of..." because 2012 will not be the year of anything. But in my opinion, 2012 will be the year when we will see some technologies emerging and others starting to fade into the sunset.
MySQL share will go down. Oracle has failed to keep the hearts and minds of database developers. As a product, MySQL has a nearly infinite life span ahead, given its huge momentum. But don't expect it to be at the forefront of innovation. Unless Oracle as a company becomes something completely different from what Oracle is today, MySQL is going to remain the cheap SQL Server alternative, because everything else implies a threat to their other profit lines.
Java will finally start to lose momentum. Again, Oracle has to change a lot from what it is today for this not to happen. From what I read about the evolution of the language, and the attempts to revive the ill-fated JavaFX, Java is stagnating and becoming a legacy language. Notice I say momentum, not market share. During next year, fewer and fewer new projects starting from scratch will use Java, but at the moment that is a small blip on the radar.
Windows Phone will have an aggressive marketing push in 2012. Windows Phone will fail, crushed by the brand superiority of Apple and the massive spread of Android to... everywhere else.
Windows will become legacy. Yes, Windows 7 is not bad. Windows Server 2008 is not bad. But both are sandwiched in their respective niches. New client technologies (tablets, phones) are challenging the old king of the desktop. And on the server front, the combination of Cloud/SaaS growth and the commoditization of basic enterprise services is challenging its dominance. Expect to see more and more integration with Active Directory trying to compensate for the lack of flexibility and the higher costs of running your on-premise Windows farms. Whereas today's Windows shops do not even ask themselves whether they should deploy new Windows servers or services, by the end of 2012 it will be customary to ask that question.
Speaking of Apple, 2012 will be the year when tablet manufacturers finally realize that they cannot compete by offering something that is not quite as good as the competition at the same price. So we'll hopefully see new products that offer innovative features while at the same time being -gasp- cheaper than the Apple equivalents. By the way, Apple will continue to be the stellar example of a technology company, money making machine, marketing brilliance and stock market darling all at the same time.
PostgreSQL will increase market share. Both as a consequence of its own improvements, which make it more and more competitive with high end offerings, and because of Oracle not managing its MySQL property well, PostgreSQL will become more and more a mainstream choice. Many think it already is.
JVM based languages will flourish. While Java as a language is stalling, alternative languages that generate JVM bytecode will accelerate their growth in 2012. The JVM is mature, runs on everything relevant from Windows to mainframes, and is a stable enough spec that nobody, not even Oracle, dares to touch it. This, together with the tons of legacy code you can interface with, makes the JVM an ideal vehicle for developing new programming languages. Seriously, who wants to implement file streams, threads or memory mapped files yet again?
Javascript will become the Flash of 2012. Mmmm... maybe this has already happened, since Flash has already retreated from the mobile front. Yes, Javascript is not the perfect programming language. But it is universally available, performs decently, and together with the latest HTML specs it allows for much of what Flash was used for in the past.
NoSQL will finish its hype cycle and start to enter the mainstream stage. Instead of a small army of enthusiasts trying to use it for everything, the different NoSQL technologies will be viewed with a balanced approach.
The computer security industry will be in the spotlight in 2012. Not because there is going to be a higher or lower number of security related incidents next year, but because as an industry, computer security has expanded too far with too few supporting reasons beyond fear and panic. Forgive my simplification, but currently computer security amounts to a lot of checklists blindly applied, without rhyme or reason. Much like in real life, security needs to go beyond the one-size-fits-all mentality and start considering risks in terms of their impact, likelihood and opportunity costs. Otherwise, be prepared to remember a 20 character password to access your corporate network.
Oh, and finally, and in spite of all the fear mongering, the world will not end in 2012. You will be reading these predictions a year from now and wonder how wrong this guy was.
Wednesday, 16 November 2011
Unity and the mismatch of user interfaces, or how I learned to hate the overlay scrollbars
During my first years using Linux, I switched between KDE and GNOME at the same time as I switched distributions, or more exactly, as each distro had a different default desktop environment. Later on, I began switching whenever one desktop environment leapfrogged the others with fancy new functionality.
Then, some time ago, I settled on Kubuntu, and keep using it for my day to day desktop work. I'm perhaps not a typical KDE user, because I use many non-KDE alternatives as my standard applications. I don't use KMail or any of the semantic desktop functionality. My default browser is Chrome/Firefox, my mail client is web based, and I use GIMP to retouch photos. This is not to say that the KDE Software Compilation apps are bad -try them and you'll see that they are in fact quite good- just that I'm more used to the alternatives.
However, when I got a netbook, I tried KDE and found it too demanding on screen real estate to be comfortable to use, so I installed Ubuntu with the default GNOME 2 desktop on it. The machine ran 10.10 perfectly, and I did not feel the need to upgrade or change anything.
We, KDE users, had to endure a couple of years ago the difficult transition from KDE 3.5 to KDE 4. The KDE 4 team had a very hard time explaining to its users the reasons for the change. As I understood it, they were rewriting the KDE internals in order to clean up the code base, implement existing features better, and allow the desktop environment to evolve without carrying over difficult to maintain legacy from the 3.5 code base. For users this was difficult to understand, since the changes in the desktop environment also required changes in applications. Which mostly meant that existing applications were either not available, or not on par feature-wise with their 3.5 equivalents at the time version 4 was released.
Two years have passed since that traumatic 3.5 to 4 transition, and the pain is over. KDE 4.7 is at feature parity with 3.5, and is regarded as one of the most elegant and configurable desktops. It is certainly not the lightest, or the least intrusive. But you have to agree that you can change almost anything you don't like using the KDE control panel to suit your tastes.
This is to say that I've been mostly a spectator in the Unity/GNOME 3 debate. That is, until I decided to upgrade Ubuntu on the netbook.
I read a lot about Unity, and was prepared for a different desktop interface. I read a lot of angry comments targeted at Unity, but honestly I did not give much credibility to them. In the land of Open Source, everyone is entitled to their own opinion, and there is always a segment of users that rejects change. It happens with any kind of change. For those whose work environment is perfect after years of tweaking and getting used to it, anything that tries to change that, even for the better, is received with anger and noise.
I was not prepared for the shock. Unity is a radical departure from the previous GNOME 2 desktop. It's not only radical, it is also trying to go in many completely different and conflicting directions at once. Let me explain.
Most desktop environments, not only KDE and GNOME but also Windows and even the Mac, have been disrupted by the appearance of touch based devices. Using your fingers on a screen is completely different from using a mouse, either standalone or via a touch pad. Fingers are less precise, if only because a mouse arrow targets an area of a few square pixels. Fingers are also much faster to move over the input area, and you can use more than one at the same time, instead of being limited to the one to three mouse buttons.
Touch devices need a different user interface metaphor, one based on... touching instead of one based on pointing. This has become evident with the success of iPhones, iPads and Android based devices. Note that touch interfaces can, or perhaps should, be markedly different depending on the screen size, because of the different ratio of screen size vs. human hand.
What does not work well is trying to mix the two metaphors. Touch and point based devices have different usage patterns, and different constraints. Trying to have a user interface that is efficient and ergonomic with both kinds of device at the same time is simply impossible. It is like trying to have the same interface for switching gears in a car and on a motorbike: yes, you can build something that can be used in both contexts. But no, it will not be optimal in both at the same time.
Unity is such an attempt, and one that fails to be efficient with either kind of input device.
The Launcher
In the past, you pressed the "Start" button at the bottom (or top) of the screen and you were presented with a set of logically organized categories to choose from. Or you could type a few letters of what you were searching for and find the program you wanted to execute. No more. You now have a bar on the left side of the screen with a row of icons, whose size cannot be changed, that in theory represent the programs you use most. This bar is on the side in order not to take space out of the precious screen height, which in a netbook is usually small. Well, at least something good can be said about the launcher.
Ah yes, we can always use some keyboard shortcuts to switch between applications. Another usability triumph, I guess.
Now, try to tell a novice how to find what he wants there. You cannot. Instead, you explain that if an icon has a tiny ball on its left side, it means that the application is running. The visual distinction between launching a new instance and switching to a running one is very small, in fact a few pixels small. It's a lost battle to try to explain the difference between creating a new document in LibreOffice using the File->New command and launching another LibreOffice instance.
Given that there is no simple way of finding what you want to execute unless it is in the first six or seven icons, you tell the novice to press the home button on the top left of the screen.
And good luck there, because something called a "Dash" appears, which is a window that lists programs. The Dash shows oversized icons of the most frequently used applications, with four small icons at the bottom that represent application categories. It's up to the novice to figure out what those categories mean, and to find anything there. Of course, the novice can type a few letters to search the Dash. Depending on how well localized Ubuntu is, he or she may be lucky and find a mail client, or a web browser, or a photo viewer. Or not.
The window title bar
One of the most important aspects of any kind of interface design, not only user interface design, is consistency. The Unity window title bar sets records for inconsistency in that area. When a window is maximized, the title bar shows the application name, except when you hover the mouse over it, when it magically changes to show you the window menu. Of course, if the application window is not maximized this is different. If our novice has not yet given up on Unity, he's going to be asking "where is the application menu?" in a matter of seconds. That is, assuming that he or she discovers how to maximize or minimize windows, which will probably send you back to the explanation about the tiny little blurbs on the launcher, because when you minimize a window, it literally disappears except for the little blurb that tells you it is still running.

Overlay scrollbars
I cannot believe that someone who uses a computer to handle documents that are routinely longer than a single page can find these scrollbars convenient. Certainly I cannot find anyone who does. The scrollbar as we knew it (before Unity, that is) was an incredibly simple metaphor that novice users could understand without explanation. Now try to explain these overlay scrollbars to a novice.
All this inconvenience is introduced in order to save something like 5% of a maximized window's width. Here is a message to the Unity user interface designers: it is not worth it, because for documents that fit in the window height you can simply hide the scrollbar, and for those that are longer it's better to have an effective navigation device than to have to deal with such an oddity.
And please, do not remind me that we can always use the keyboard. Because it is true, but we are talking about usability, right?
Lack of customization
All this is the default behavior. Being a Linux user, you may think that it is just a matter of finding where the configuration dialog is and changing those odd default settings. Here is the good news: there are no options to change most of this, lest the novice become confused by too many options. In the end, I find this the most sensible choice for Unity: your users are going to be so confused by the user interface that it's best to hide anything else to prevent them from becoming distracted from learning the new Unity ways of doing things, which is going to consume most of their mental energy.

Maybe there is something good about upcoming Unity releases. I'll have to find out in a VM, because there is no way I'm going to use it on my desktop, laptop or netbook. Which is now running Mint, by the way. Of course, with the overlay scrollbar package removed.
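For anyone wanting to do the same, this is roughly what getting rid of the overlay scrollbars looks like on an Ubuntu-based system. A sketch only: the package names below are the ones I recall from the 11.04/11.10 era and may not match your release, so check what is actually installed first.

# See which overlay scrollbar packages are installed (names differ between releases)
dpkg -l | grep -i overlay-scrollbar

# Remove them; adjust the names to whatever the previous command reported
sudo apt-get remove overlay-scrollbar liboverlay-scrollbar-0.2-0 liboverlay-scrollbar3-0.2-0

# Workaround often mentioned at the time: disable them per session instead of uninstalling
echo 'export LIBOVERLAY_SCROLLBAR=0' | sudo tee /etc/X11/Xsession.d/80overlayscrollbars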
P.S: this post comes with tremendous respect for the Unity and Ubuntu developers. I know that creating a good user interface is incredibly hard. And I know that lots of time and resources have been invested in Unity in a well meaning attempt to create something different and better. I just can't understand what mental processes have been in place to allow for Unity to see the light in its current state.
Saturday, 4 June 2011
XSLT Transformation to analyze MSSQL Server 2005 traces
During the final user validation of a new packaged application, we were getting complaints about bad performance, and the usual suspect, the database, was apparently perfectly fine. This is where the SQL Server 2005 trace facilities shine. You can trace what is going on inside the database engine with quite a high level of accuracy, including filtering out the "noise" generated by other users, other application components, or yourself.
After running a trace while users were exercising the application, I became completely convinced that the database was not at fault. However, I wanted to put my conclusions on paper, and for that there is no replacement for issuing a formal report. Problem was, to create that document I needed to take the trace data and use it to generate charts, rankings and whatever else was needed to support the conclusions.
This was going to be easy: just import the trace data into a spreadsheet and play around with it. Let's start with the trace data itself. The first thing you need is to save it to a file. The SQL Server trace tool only gave me the option of saving it in its own proprietary format, intended to be used by the tool itself, or as XML.
I chose XML; after all, this is something close to a universal standard, isn't it? Yes, it is. I took my XML file and tried to import it into Excel. Only to discover that Excel was expecting basically a "flat" XML file, where the rows of your dataset are child nodes. The SQL Server 2005 trace format is not like that. SQL Server 2005 uses a generic "Column" node with an identifier and a name.
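To make the problem concrete, the trace file looks roughly like this (the event and column names below are illustrative examples, not a complete listing; the generic Column nodes are the point):

<TraceData xmlns="http://tempuri.org/TracePersistence.xsd">
  <Events>
    <!-- One Event node per traced event; the actual values live in generic Column nodes -->
    <Event id="12" name="SQL:BatchCompleted">
      <Column id="1" name="TextData">SELECT * FROM SomeTable</Column>
      <Column id="13" name="Duration">154000</Column>
      <Column id="10" name="ApplicationName">SomeApp</Column>
    </Event>
    <!-- ...more Event nodes... -->
  </Events>
</TraceData>

Excel, on the other hand, wants each row as a node whose children are the column values, one element per column, which is exactly what the transformation below produces.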
No problem, I said to myself. This is XML, so a simple XSLT transform should get me the data in the columnar layout that Excel expects. Simple, right? So simple that I tried a couple of Google searches before convincing myself that either the SQL trace tool is not that popular or simply nobody had wanted to do this before.
And in the end, simple it was. Many ages ago I wrote quite a few XSLT transformations, so I was familiar with what I wanted to do and how to do it. However, memory fades quickly when you don't touch a subject in a while. And there are always new things to learn, which is to say that there are surprises around the corner, even when you use something as standard and plain vanilla as XML. In the end, it took me the best part of four hours to get this right.
So much more time than I expected that I'm documenting it here for future generations, and for my lousy memory as well. And because, to be honest, I expected that someone had already done this.
First, the XML output of the SQL Server trace tool contains an innocent looking line at the beginning:
<TraceData xmlns="http://tempuri.org/TracePersistence.xsd">
It took me a while to discover that my XSLT processor was trying to fetch that XSD, silently failing, and then refusing to walk beyond the root node. So the first thing you need to do is edit the trace file and leave that line as:
<TraceData>
Almost there. Then you create an XSLT file with this:
<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
  <!-- Extracts the SQL Server trace data to columnar format -->
  <!-- Author: consultuning@gmail.com -->
  <!-- Last updated: 4/6/2011 -->
  <xsl:output method="xml"/>
  <xsl:strip-space elements="*"/>
  <xsl:template match="/">
    <Events>
      <xsl:apply-templates select="TraceData/Events" />
    </Events>
  </xsl:template>
  <xsl:template match="Event">
    <Event>
      <Name><xsl:value-of select="@name" /></Name>
      <xsl:for-each select="Column">
        <xsl:variable name="ColumnId" select="@name" />
        <xsl:element name="{$ColumnId}">
          <xsl:value-of select="." />
        </xsl:element>
      </xsl:for-each>
    </Event>
  </xsl:template>
</xsl:stylesheet>
And use your favorite XSLT processor to generate the transformed XML:
xsltproc [xslt file] [xml trace file] >[output XML file]
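For the sample trace fragment shown earlier, the transformed output would look something like this (values illustrative); this flat layout is what Excel expects, with one Event node per row and one child element per column:

<Events>
  <Event>
    <Name>SQL:BatchCompleted</Name>
    <TextData>SELECT * FROM SomeTable</TextData>
    <Duration>154000</Duration>
    <ApplicationName>SomeApp</ApplicationName>
  </Event>
  <!-- ...one Event node per traced event... -->
</Events>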
And that's all. Well, you'll get some data truncation warnings from Excel if the traced SQL statements are too long, but overall you'll be able to import this into Excel without problems.
The worst part of all this is that, after looking at the trace, I discovered that some users were getting seriously bad response times. But that is the subject of another post.
Tuesday, 5 April 2011
Advice for spammers
I must confess: I don't know how most internet based businesses work. But from what I read, it seems that "pagerank" plus "traffic" is all you need to earn tons of money without actually doing much of value. There seem to be whole operations, called search engine optimizers(*) (or SEOs), devoted to changing a site or a set of sites so that they appear in the top Google search results.
How it works, I'm not sure. And it has not interested me too much in the past. From what I understand so far, these SEOs trick Google into displaying in its top results links to sites that are merely a collection of ads, plus some content literally ripped off from somewhere else. The idea is to leverage the enormous scale of the internet so that even if only a small percentage of users click on your ads, you get a respectable amount of cash as a reward for your hard labour. If you're thinking of how spam mail works, you're right, this is exactly the same logic. One gullible user in a million is likely enough to pay for all this in ad revenue.
If that's not fraud, it is pretty close. At the very least, they are polluting Google search results and giving its customers less useful results only to grab a slice of that valuable traffic and ad revenue, probably violating a few Google terms and conditions along the way.
The problem is, the Internet is so huge that even a company with the resources of Google cannot completely stop this. What is more, Google is always striving to eliminate manual intervention as much as possible. Which creates a never ending arms race between the fraudsters and Google. Entire sites and organizations are devoted to second guessing what the latest updates to Google's ranking algorithms mean for their customers, so that they can keep appearing at the top of the search results.
I wish someone with the actual resources to quantify it could estimate the cost of this kind of activity. Probably the sheer infrastructure costs of those operations, plus the time wasted by Google users on useless search results, would be more than enough to end poverty in a couple of countries. Each year.
Sorry, going off topic here. And flawed logic: while poverty could be eliminated from a country, how would the SEOs pay the rent? The idea would just shift poverty from one place to another, with the only hope of getting a more valuable return from one group than from the other. Which opens another completely different debate that this blog is not really prepared to enter.
What is really interesting to watch is the arms race. Each time Google refines its ranking algorithms trying to defeat them, the SEOs adapt and look for new ways around the changes.
One of the ways of increasing your relevance has been, since the original PageRank publication, the number of sites that contain links to another site. Over time, Google has changed the original algorithm, sure, but there are still plenty of people trying to game it by planting links to their sites wherever they can, including blog comments.
As Blogger allows commenters to enter a URL for their site, there is really no other way than to moderate the comments and flag the obvious link-dropping as spam.
But I keep thinking that some of them could actually be relevant and useful. So let me clearly explain the guidelines for comments being accepted on this blog.
- Comments must absolutely agree with the points made in the post. The purpose of the comment is only to offer praise in the following areas:
- How clever, clear and structured the blog post is.
- How the post is magically synchronized with the hot topics in the industry.
- How appropriate and relevant it is to the problem the reader has at hand. In his infinite wisdom, the blogger always chooses the topic that is most interesting to the reader at this precise moment of his or her existence. Both, of course, in the personal and professional dimensions of life.
- Any dissenting commenters will, shortly after posting, realize their enormous mistake and immediately post an apology that summarizes the qualities emphasized in the first rule.
- Logical arguments are allowed as long as they don't conflict with the first rule.
- Passionate discussion is allowed as long as it supports what is stated in the first rule.
Remember, anything not following these rules is automatically flagged as spam.
Before the flames start: yes, I know, there are people out there who call themselves SEOs who are not interested in making you jump to a site at any cost. Yes, there are SEOs who just make sure that your site is correctly structured so that search engines can index it really well. But compared to the other kind of greasy SEOs, they are in the minority. Proof? Quick, what is the average signal/noise ratio of your latest Google search? How many times have you given up after jumping link after link, ending up on different pages that contain exactly the same text? My advice for these "white hat" SEOs: find another name for your profession. SEO is quickly becoming an undesirable term on a resume.
(*) See, I know so little about how internet business works that perhaps SEO does not actually mean that. Maybe it's short for "Search Engine Organizers". Or "Search Engineer Operations".
Wednesday, 23 February 2011
Is your privacy worth $100?
Suppose you come across some nice guy on the street that makes you the following offer:
Hey, man, you really look like a nice person. Can you give me your list of friends, tell me where you are during the day, tell me what your opinions are on topics of my choice and generally tell me what, when and whether you like the things I ask about? I'll get this information from you during your entire life, and perhaps I may want to share it with a few selected partners. You'll have no control over who I partner with or which pieces of information I will share.
Being the naturally inquisitive person you are, you'll of course ask back: how much will you pay me?
This is basically what consumer panels have done for years. A long-standing tool in market analytics, consumer panels have been used by marketers to try to understand how people behave, react and interpret things. The biggest problem with consumer panels, besides cost, is I think what is called the "documentary effect": people usually act differently from what they say. The term comes from media panels, because you're more likely to say that you watched an interesting film about quantum mechanics than admit that you were engrossed in the latest "Got Talent" edition.
Now, put down on a piece of paper what you consider a sensible amount to charge for this information. Do it before you read the rest of the post.
Then suppose you come across another nice guy on the street who makes you the following offer:
Listen, I have an irresistible proposal for you: see, I will give you the ability to stay in touch with your friends, send them mail messages and chat with them about your holidays, show them your photographs and allow them to write comments about them.
Being the naturally inquisitive person you are, you cannot resist asking: how much will all this cost me?
Now, put down on another piece of paper what you consider a sensible amount to pay for this. Forget for a moment the plethora of services that provide this for free, and force yourself to write a number.
Then take the first and second papers and put one beside the other. Now take the first paper and write below your number: 100 USD.
I'm not sure if the number is fair or not. Perhaps it's not too much, after all. That's what this information is worth to Facebook(*). If I were explicitly asked to sell my personal information, I'd consider 100 dollars a very low price.
I haven't seen any Facebook business plans, but I think it's not crazy to assume that Facebook's business model revolves around giving marketers a way to advertise with scalpel-like precision. The difference between what a consumer panel costs, with all its lack of precision, and the at most 100 USD per user that Facebook is valued at is a dream come true for the marketer, especially since you avoid the documentary effect.
And for Facebook, after operational and capital expenses, the difference is their profit.
And what's wrong with this? After all, Google is giving away excellent services for free in exchange for the privilege of being able to snoop your data. On paper, Facebook just moves this idea a bit further.
My only concern with all this is: what do you get in exchange for all those profits FB will collect? The ability to send messages and upload photos? Give me any day a Facebook-like service where I could, with a one-off payment of 100 USD, do exactly the same things that can be done with Facebook.
I'd pay them. Assuming of course that I was interested in using FB-like services. Which I'm not. But that's another story.
(TL;DR and advice for prospective clients: no, I do not have a FB page, profile or anything like that. Nor do I feel the urge to get one. And I'm sure the FB owner is a nice guy, and this post was not meant to insult people who enjoy FB.)
(*) Estimate based on a 50 billion USD valuation divided by an estimated 500 million active users, which works out to roughly 100 USD per user.