Dreams in Hollywood

On a recent trip to Los Angeles, the driver of the car that picked me up from the airport started talking about his life through the course of our journey. Having come to LA as an aspiring actor almost 30 years ago, he told me how he had dreamt big, and gotten parts in independent and mainstream movies, but always remained on the sidelines. Enough to generate hope that things would work out, but not enough to actually do so.

“I went to school with Helen Hunt. Do you know why she became famous?” he asked.

Before I could really say anything, he continued.

“Because her father, Gordon Hunt, is a famous actor/director” [known primarily for The Jetsons and Scooby Doo]. “She wasn’t really the prettiest one in the class, and there were definitely more talented people than her. But she got ahead.”

Of course, I have no way of confirming whether he indeed went to school with her, but it’s definitely a plausible story. Wealth and power always beget more wealth and more power. Why not indeed?

The thing that got me thinking was the fact that 30 years later, his once active career has dropped in scope to driving limousines for a living. He mentioned that he enjoys his job, but ultimately given a choice, it’s not what he’d be doing. How then does one realize when to give up? At what point could he have turned around and said – the acting gigs aren’t really going anywhere, perhaps I should try and develop an alternate career? Is there a framework that people have that allows one to take such a major decision? Would it have helped him lead a more fulfilling life than the one he does today?

 

Making sartorial statements as an engineer

Silicon Valley prides itself on being a spearhead in providing amazingly flexible work environments to its employees. There are barely any established timings to get in and out of work, lunchtime jogs are de rigueur, working from home is a luxury extended to most people, appearances aren’t important as long as they’re not offensive, and people are judged not by how they dress but by what they achieve. The epitome of meritocracy, one might say.

So why don’t engineers pay more attention to how they clothe themselves? (Just to be clear, I’m talking about men here.)

Unlike sales or marketing roles, which are built around social interaction and thus demand non-trivial thought about one’s sense of dress, an engineer really has nothing more to do (in between a spate of meetings) than to sit down and actually start building something. Which means they can dress as comfortably as they want, without the stress of deciding what to wear every day. Rather than take this as a blank cheque to dress “down” into the proverbial hoodies and t-shirts, why not take it as a challenge to creatively explore an individual sense of style? What better circumstances to develop and perfect this highly personalized form of self-expression than an environment that doesn’t penalize you when it doesn’t always work?

One of the things that always irks me is the perception most people have – real or imagined – of engineers being unable to fend for themselves when it comes to making a sartorial statement. The media, as usual, does an amazing job of portraying the typical inarticulate programmer in a t-shirt, hoodie and torn jeans, working out of a windowless basement with only code, soda and perhaps some science fiction for company. Even Obama has gone on record making a dig at Mark Zuckerberg’s rare use of a jacket and tie. But is this stereotype really warranted?

Building software is, for the most part, a very solitary endeavor, and requires equal parts creativity, analytical skill and an eye for detail. I think of it as an art – one that compares most favorably to writing a novel, painting a canvas or even composing a symphony. And as befits any artist, engineers take pride in their craft – there are umpteen blog posts and talks about the elegance of a certain algorithm or the simplicity of a piece of code that achieves something complex. Why then don’t more of them take pride in presenting themselves?

I don’t disagree that there’s a level of creativity involved and some people may not consider themselves well suited to pulling together various items of clothing in an aesthetically pleasing manner. But in this day and age, it’s really not that hard to find a host of resources on the web that offer everything from helpful tips to services that help you overhaul your wardrobe!

Software has been “eating the world” for a long time now, but it is only with the relatively recent focus on combining it with good design that its use has become increasingly prevalent in every aspect of life. Practitioners of the art of making software (or hardware, for that matter) should really stand up and take their place front and center in the well-dressed world, and shed the stereotypes that have dogged them for years. After all, they are not sitting in a basement all by themselves any more – they are under the lens of the entire world as it watches them create the next wave of innovation.

And they might as well pose for the camera in a tweed sports jacket and sharp shoes rather than a company hoodie and flip-flops.

Can there be an algorithm for creativity?

Artificial Intelligence has always been one of the most fascinating aspects of computer science. With devices getting smaller, the cloud more ubiquitous, and processors more powerful, we find ourselves at the beginning of an era in which machines will get progressively smarter. Apple’s Siri, Google Now, and Microsoft’s Cortana are but some examples of fledgling intelligent assistants that we all hope will one day become sentient, autonomous and (hopefully) friendly sidekicks that make our lives easier.

The creation of software such as this has been made easier with the development of what is today called Deep Learning – an advancement in the field of machine learning where sophisticated neural networks are being trained with more and more data to learn behaviors, recognize images, understand speech and respond in natural language to domain specific queries. How these systems work is pretty simple to understand.

Consider the task of understanding and recognizing the contents of an oil painting which portrays a woman standing in a park. The intelligent system could start by analyzing the smallest possible chunks of the painting – which are called features – in this case, a series of dots of paint, organized by color. It might then do a second pass to recognize higher-level features, examining how these dots connect in order to work out which ones constitute a line. The third pass might analyze the contours of the known lines and determine shapes – circles, squares, and other irregular ones. If the system has been trained to map a known set of shapes to real-life objects, it may, in the next pass, be able to perceive an eye, a nose, a tree and the other objects that make up the painting. And lastly, it may put these things together to understand that two eyes and a nose make up a human – so this must be one – and that trees with grass and butterflies are generally found in a park, so this must be a park.

And so it goes, from being able to detect the contents of a painting to figuring out which videos on YouTube have cats in them.

But a missing component of real intelligence is creativity, one definition of which is the ability to come up with unique and novel explanations for events and occurrences that cannot always be explained by observing the past. As David Deutsch writes in his excellent piece on AI in Aeon magazine,

I myself remember, for example, observing on thousands of consecutive occasions that on calendars the first two digits of the year were ‘19’. I never observed a single exception until, one day, they started being ‘20’. Not only was I not surprised, I fully expected that there would be an interval of 17,000 years until the next such ‘19’, a period that neither I nor any other human being had previously experienced even once … How could I have ‘extrapolated’ that there would be such a sharp departure from an unbroken pattern of experiences, and that a never-yet-observed process (the 17,000-year interval) would follow?

All of the programs that have been written so far have had the ability to compute, of course. Increasingly, they have even had the ability to know – to store knowledge and intelligently answer questions about things. But none of them, so far, have been able to want. What, then, would it mean for a program not to go as far as being called sentient or alive, but to be merely creative?

One way to answer that would be to define creativity as the process of finding relationships between facts from different branches of knowledge. The further removed these facts are from each other, the larger the impact when such a connection is made. While we may want much of this relationship-creation to happen in some orderly fashion, the reality is that most of it happens in a very serendipitous manner. Is it then possible to simulate this accidental fact correlation in a program?

If we analyze a deep learning program trying to identify features for a given task, we will see that its probabilistic nature means it is bound to commit mistakes. There will be times when it throws up a false positive – identifying an object as something that it is not. Most times, the evaluator will mark this as a “negative example”, in order to make sure that this particular mistake is not repeated the next time a similar case is seen.

But what if this is simply how a machine makes its mistakes? After all, it is no different from an artist making an errant daub of paint on their canvas and realizing that it actually fits in a way they had never imagined before. Could we then train a system to take these mistakes and channel them into a positive feedback loop that improves the task the machine was doing – especially if that task were creatively inclined, like producing a piece of music, analyzing some art, or even painting a piece of its own?

 

Religion and the curious death of free speech

Religion, if you look at it, is ultimately just an idea. One, admittedly, that has the support of a large number of people. But as an idea, it is by itself not sacred. The illusion or reality of its sacredness is driven wholly by how it is perceived by those who believe in it and put it on a pedestal. Unlike people, however, an idea doesn’t have any rights. When someone has a very emotional response to an attack on an idea they believe in, it does not in turn give them the right to defend it with violence. They may at best have the opportunity to provide counter-arguments, engage in constructive criticism of the discrepancies in their opponent’s claims, or even completely disconnect themselves from said opponents. But never to respond through violence.

Whenever the concept of free speech comes up, in religious contexts or otherwise, Voltaire’s quote is pretty hot property. To defend till death someone’s right to express themselves even if you didn’t personally agree with them? A remarkably romantic concept. If death is the ultimate price to pay to protect someone’s freedom of expression though, as the Charlie Hebdo attacks so tragically illustrated, why go down the road of even provoking someone to the point that they would bay for blood? What’s wrong with just expressing opinions that don’t hurt anyone? Shouldn’t the creatively inclined have an unofficial motivation – or even an official regulation – that makes them responsible for their content, given the volatile times we live in, where trigger-happy adherents are more than eager to mow down anyone mocking their ideas?

Questions such as these have resurfaced time and again in my mind, in the Indian context and otherwise. India purportedly guarantees freedom of speech to its citizens, yet for a very long time – through both action and inaction – it has diluted its constitution’s ability to do so.

I have always found it surprising that whenever a major event happens on the world stage, Indian leaders are almost never among those quoted in the many statements of consternation or condolence that eventually get broadcast across the media. Imagine my surprise, then, at the spate of statements condemning the Charlie Hebdo attacks from not only the government but also the Hindu right wing. Because that very week, an attack on free speech that was barely covered by the world media was unfolding in Tamil Nadu in southern India, one with some wide-ranging repercussions.

Because of incessant threats against him, with his books being publicly denounced in the streets by right-wing Hindu groups, and with no governmental organization supporting him, Perumal Murugan, one of India’s most revered Tamil-language authors, gave up writing and went back to being a school teacher. Not only that, he took the rather extreme steps of having copies of his books recalled, asking literary festival organizers not to invite him, asking his readers to dispose of the copies they owned, and asking publishers to stop printing any more. Apart from the usual buzz in the liberal media, the event went mostly ignored – and the petitions filed by none other than the ruling BJP and its allies resulted in a complete shutdown of the author’s hometown for a week. And not a single statement was heard from the government about it.

Romila Thapar, the famous Indian historian, put it very eloquently last year,

It is not that we are bereft of people who think autonomously and can ask relevant questions. But frequently where there should be voices, there is silence. Are we all being co-opted too easily by the comforts of conforming? Are we fearful of the retribution that questioning may and often does bring?

Why then should we defend the deliberately provocative?

Because ultimately, if we want to live our lives in a meaningful manner, in a way that we are not censored, our voices are not muffled or our actions impinged upon, we must all have the ability to provoke. And a mindset to rationally support others who do.

Because if we do not, then the rights and freedoms that we have left will be throttled even further than they already have been by the treacherous cocktail of state actors, non-state actors and the skirmishes between them. It is important to understand that religion, just like other human ideas, is open to satire. That it is dangerous to put any icons, institutions or figures that are ultimately a human creation over and above satire.

Respect should be accorded to them yes – but not at the expense of the liberties and rights that are accorded to human beings.

An Armenian vignette

If there are places we call third world countries today, then I would call Armenia a fourth world country

As our jet black GMC Yukon sped through an interchange to get on to the I-405 in Los Angeles yesterday, those were the words my Uber driver said with all the passion he could muster in response to my question about how his home country had turned out almost two decades after he left.

Armen Nazaryan* was born and grew up in the city of Yerevan, the capital of Armenia. He worked as a tailor, a plumber, and then a jeweler before he moved to Los Angeles in the early 90s, starting life afresh in the jewelry business. Educated throughout his life in Russian, the lingua franca of the then USSR, he has barely managed to dislodge his pleasant but thick Russian accent in all his years in the US. As he says of the Armenian community in LA, which also happens to be the second largest anywhere in the world after Armenia itself,

“It’s entirely possible to live in Glendale and never speak any other language but Armenian all your life”

He has been very happy in his adopted country, and when I asked him if he’d ever go back, he talked at length about why not.

“Ever since the USSR collapsed in 1991, all the power and money in the country has been concentrated in the hands of maybe a dozen families. The entire economy is corrupt, with kickbacks and bribes ruling the roost. I feel sorry for my country – it used to be very nice when I was growing up – but today, I would never want to live there. My life is here. I can speak my language, Jons [the grocery store] has traditional Armenian foods, and I earn good money here. Why would I go back into that mess?”

As we started talking about his experiences driving for Uber, he described to me his favored algorithm to optimize for rides in the city.

“In the evenings close to dinner time, I park near high end restaurants. Once that crowd finishes, I head to clubs that I know are open late – not all of them, just the few that have clientele leaving them all night. And once that is done, I go to the top hotels which generally have people heading to airports early in the morning. I make good money doing this”.

The obvious question in my head was, of course – why, as a trained jeweler, was he driving full-time for Uber? As curiosity got the better of me, I decided to ask before the ride ended.

“Ah”, he said, after a pause that had warning bells going off in my head about whether I’d asked too personal a question.

As I started to offer an apology, he began to talk, and so I shut up.

“I was doing well in the jewelry business, with clients around LA and New York. My main client though, who had 90% of my business, was in New York on 9/11 and was lost when the WTC fell. I lost everything then, and couldn’t recover.”

Not knowing what to say, I stayed silent.

“But it’s okay you know? Life throws all kinds of punches at you. We must all absorb them and continue forward – otherwise, where would we be?”

*name changed

Practical applications of Y Combinators in Clojure

When the term Y Combinator is mentioned in most technology circles today, the image it usually conjures up is of the well-known startup incubator with its orange logo and list of famous funded companies. There’s a very interesting reason they chose this name, and their FAQ page [1] gives some context on why they did so,

The Y combinator is one of the coolest ideas in computer science. It’s also a metaphor for what we do. It’s a program that runs programs; we’re a company that helps start companies.

Which of course raises the question – why are Y combinators so cool, and are there any real reasons to know about them?

This post aims to provide insight into some practical applications of Y combinators, and I highly recommend reading some of the following resources on the topic that do a great job of articulating the theory behind them. [2] [3] [4]

A gentle introduction

Let’s assume we want to write a recursive function that does something useful, but with a catch – we don’t want to invoke this recursive function directly from within itself. Why would we want to do this? Multiple reasons might force us to – perhaps we’d like to avoid stack overflows, or perhaps there are other, deeper reasons which are out of the scope of this post [5] [6]. Regardless – how then would we go about doing so?

In other words, the problem we’re trying to solve is to “introduce a self-reference to the function, without a direct self-reference”.

Before diving into the programming aspects, let’s consider this in a purely mathematical form. A recursive function is defined partly in terms of itself. So, in the case of the Fibonacci series,


\text{Fib}(0) = 0 \text{ (base case 1),}
\text{Fib}(1) = 1 \text{ (base case 2),}
\text{Fib}(n) = \text{Fib}(n-1) + \text{Fib}(n-2) \text{ for all integers } n > 1.

Now, how would we find an input to a function such that its output is the same as that input? For example, in the following function,

\text{F}(x) = x * x

there are two inputs, 0 and 1, for which the function’s output is the same as the input provided to it. Each of these is called a fixed point of the function.

If we can now find a fixed point p of F such that F(p) is equivalent to p, we can use F(p) or p interchangeably (since they are the same thing) as the “recursive” function without direct self-reference [7].

It turns out that for any generic λ-expression f, (\lambda x. f(x x))(\lambda x. f(x x)) is a fixed point of f [8].

Given this, we can build a function that returns a fixed-point for any function f by taking the function in as an argument:

\lambda f. (\lambda x. f(x x))(\lambda x. f(x x))

This is known as the fixed-point combinator, or the Y combinator. Therefore, for any function f, Y(f) is a fixed point of f. That is, f(Y(f)) is equivalent to Y(f).
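
To see why, expand the definition once: applying the inner λ-expression to itself is a single β-reduction,

Y(f) = (\lambda x. f(x x))(\lambda x. f(x x)) = f((\lambda x. f(x x))(\lambda x. f(x x))) = f(Y(f))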

A real world example

So far so good. But how is this useful and why should you know about it? Let’s examine the case of a Clojure function that gives us the sum of a sequence of numbers [9].
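
A minimal, directly recursive version might look like this (a sketch along the lines of the article in [9]; the name sum-seq is my own):

(defn sum-seq [s]
  (if (empty? s)
    0
    (+ (first s) (sum-seq (rest s)))))

(sum-seq [1 2 3 4 5])
;; => 15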

As we can see, this function invokes itself on its very last line, which is exactly the kind of behavior we’re trying to avoid. How would we write it without a direct self-reference? One way to think about it is as a chain of function calls: we start by handing in a function, and what we get back is not the result itself, but the “next” function to call to compute the sum of that sequence. Here’s an example,
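
(A sketch; the generator name sum-seq-gen is assumed from the discussion that follows. Instead of calling itself, it takes the “next” function as an argument.)

(defn sum-seq-gen [func]
  (fn [s]
    (if (empty? s)
      0
      (+ (first s) (func (rest s))))))

;; No self-reference anywhere, but to sum a two-element vector we have to
;; stack up three applications by hand: one per element, plus one for the
;; empty-sequence base case (the innermost nil is never actually called).
((sum-seq-gen (sum-seq-gen (sum-seq-gen nil))) [1 9])
;; => 10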

This looks like it works. But as we can see, in order to get the sum of a two-element vector, we needed to invoke this function thrice – clearly not a great use of our editing skills or our time. How would we make this simpler? By simply finding the fixed point of this function, we could achieve what we set out to do! [10] Let’s start by writing a fixed-point combinator.
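
One way to write it in Clojure (a sketch of the strict, applicative-order variant, where the self-application is wrapped in an extra fn so that eager evaluation doesn’t loop forever):

(defn Y [f]
  ((fn [x] (x x))
   (fn [x]
     (f (fn [arg] ((x x) arg))))))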

If we were to pass the sum-seq-gen function to this Y combinator we just wrote above, life becomes much simpler.
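
Something along these lines (the name sum-seq-fn mirrors the discussion below; this is a sketch, assuming the definitions above):

(def sum-seq-fn (Y sum-seq-gen))

(sum-seq-fn [1 2 3 4 5])
;; => 15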

So far so good. But the real power of a combinator isn’t just that it allows us to write recursive functions without direct self-reference. It is that it allows us to create very useful wrappers around our functions, which can achieve all sorts of cool things without ever needing to rewrite the original function. As an example, let’s consider the use case of needing to log every internal function call that’s going on inside sum-seq-fn. In the regular programming model, we would need to add these log lines to sum-seq-fn itself, which is a huge overhead. But by using combinators, we can just define a LoggingY that will do this for us.
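
One possible LoggingY (a sketch: it is identical to the Y above, except that the self-reference handed to the generator is wrapped so that every internal recursive call is printed on the way in and on the way out):

(defn LoggingY [f]
  ((fn [x] (x x))
   (fn [x]
     (f (fn [arg]
          (println "-> called with" arg)
          (let [result ((x x) arg)]
            (println "<- returning" result "for" arg)
            result))))))

;; A logged equivalent of sum-seq-fn, built from the very same generator:
((LoggingY sum-seq-gen) [1 2 3])
;; -> called with (2 3)
;; -> called with (3)
;; -> called with ()
;; <- returning 0 for ()
;; <- returning 3 for (3)
;; <- returning 5 for (2 3)
;; => 6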

Without having to change the original function, we’ve just added some deep instrumentation to it.

A memoization example

Let’s consider a slightly less trivial example. What if we wanted to make a recursive function more efficient by introducing memoization? Could we write a generic non-recursive function and then apply an equally generic combinator to memoize it? Absolutely!

For this exercise, let’s define a more generic fixed-point U combinator [11], which applies an “abstract” function myapply to the function f. We can use the freedom of choosing myapply, for example, to transparently interpose memoization.
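
One possible shape for such a combinator (the exact form here is my own sketch, expressed in terms of the Y defined earlier: myapply gets to wrap the self-reference before f ever sees it, and with identity as myapply it degenerates to plain Y):

(defn U [myapply]
  (fn [f]
    (Y (fn [recur-fn]
         (f (myapply recur-fn))))))

;; Sanity check: with identity, U behaves exactly like Y.
(((U identity) sum-seq-gen) [1 2 3 4 5])
;; => 15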

We can now create a combinator-ready [12] function that returns an anonymous function that will cache a function’s arguments and results for it.
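
For instance (again a sketch: memoizer builds a myapply that closes over a shared cache, so every level of the recursion reads and writes the same atom; fib-gen is my own non-self-referential generator mirroring the Fibonacci definition earlier in the post):

(defn memoizer []
  (let [cache (atom {})]
    (fn [f]
      (fn [arg]
        (if-let [hit (find @cache arg)]
          (val hit)
          (let [result (f arg)]
            (swap! cache assoc arg result)
            result))))))

(defn fib-gen [func]
  (fn [n]
    (if (< n 2)
      n
      (+ (func (- n 1)) (func (- n 2))))))

;; With the plain Y combinator this takes an exponential number of calls;
;; with the memoizing myapply each sub-result is computed only once.
(((U (memoizer)) fib-gen) 35)
;; => 9227465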

As we can see, without ever having to modify the original function, we’ve been able to use a variety of specific or generic combinators as wrappers that help make programs more optimal, easier to debug and, ultimately, easier to fine-tune.


 

  1. http://www.ycombinator.com/faq/
  2. https://medium.com/@ayanonagon/the-y-combinator-no-not-that-one-7268d8d9c46
  3. http://matt.might.net/articles/implementation-of-recursive-fixed-point-y-combinator-in-javascript-for-memoization
  4. https://mitpress.mit.edu/sicp/full-text/book/book-Z-H-26.html#%_sec_4.1
  5. http://en.wikipedia.org/wiki/Stack_overflow
  6. http://aegis.sourceforge.net/auug97.pdf
  7. Seriously, highly recommend reading up on what fixed point combinators really mean
  8. http://en.wikipedia.org/wiki/Lambda_calculus
  9. Courtesy http://www.fatvat.co.uk/2009/04/understanding-y-combinator.html
  10. This almost seemed like magic the first time I encountered it!
  11. http://lambda-the-ultimate.org/classic/message5463.html
  12. http://stackoverflow.com/questions/15859673/fixed-point-combinators

Texting should improve language skills, not regress them

If the past few generations (millennials and onwards) have grown up spending the majority of their lives communicating over the written medium – text, email, et al – aren’t they more likely to have a better command of language? Rather than lambasting them for using SMS and crying about how language is devolving over time, isn’t it fairer to assume that they’re much better off than those who descended on the written word only a few times a year, for a school report or a forced letter to family and friends?

(h/t: xkcd)

Piqued by Piketty

The rise of the middle class has been a hugely important political and social development across the world, spanning a large part of the 20th century. How would society change if the number of jobs that have historically driven this section of the population were to come crashing down, as a result of technological progress and automation? Would widespread unemployment potentially result in the kind of social, political and economic unrest normally associated with wars? More importantly, would the economic inequality brought about by such changes cause civilization to go off track?

Thomas Piketty, an economist at the Paris School of Economics, recently published a book in which he argues that the US might be pioneering a hyper-unequal economic model in which the wealthy top 1% hold the lion’s share of the national income, leading to an ever increasing marginalization of the middle class. The book has had some glowing reviews, with one reviewer terming it “An economic, social and political history of the evolution of income and wealth”. Piketty’s inspiration is wide-ranging, taking cues from the novels of Honore de Balzac and Jane Austen, and the book offers a treasure trove of data that he, along with his Berkeley collaborator Emmanuel Saez, has collected over the last decade. As the World Bank researcher Branko Milanovic says in his review,

“I am hesitant to call Thomas Piketty’s new book Capital in the 21st century (Le capital au XXI siècle in the French original) one of the best books in economics written in the past several decades. Not that I do not believe it is, but I am careful because of the inflation of positive book reviews and because contemporaries are often poor judges of what may ultimately prove to be influential. With these two caveats, let me state that we are in the presence of one of the watershed books in economic thinking.”

Heady praise indeed.

Piketty’s main thesis is that over time, the return on capital will be higher than the rate of growth of the overall economy, implying that extremely wealthy individuals will own an ever bigger slice of the global economic pie. In fact, he believes that this will happen automatically, without any natural factors to staunch its progress. Technology and automation will serve merely to accelerate the process: a large section of society will lose their jobs, and the resulting mass unemployment is likely to create a society rife with social unrest and upheaval, further weakening the middle class.

If these arguments have wings, then it becomes entirely clear that there are world altering factors at play here. A future which holds widespread unemployment, even though technological progress has historically never failed to generate new opportunities, is hard for most to accept and plan for.

Inequality has been a matter of routine for most of human history. A large fraction of the super rich have been born into wealth, and it has only been in recent times that the common man has been able to achieve parity with them through his own efforts. Since the end of World War II, it has seemed obvious that the reduction in inequality in various countries around the world has been a direct result of the political environment surrounding democracy and the policies that stem from it. But if one listens to Piketty, democratic and capitalistic principles don’t automatically lead to a reduction in inequality – this period was in fact an aberration rather than the norm.

Like many other unifying theories, the development of a unified theory for capitalism has been the holy grail for a large number of economists in the past century or two. Rev. Thomas Malthus laid down the theory that the growth in population would keep the bulk of humanity trapped in poverty – and this was most definitely the case for most of human history. David Ricardo linked the value of a fixed amount of land relative to the expanding supply of other goods to the wealth of the landed aristocrats. And finally, Marx predicted that competition amongst workers and investors would drive down wages to levels that would offer bare sustenance, concentrating wealth in fewer hands.

What all of them failed to anticipate and account for was the explosion of productivity driven by new technology, which allowed the masses to insulate themselves from the dystopian futures that had been imagined for them. This “fact” has become commonplace enough in today’s economic discourse that no one has really debated its veracity.

Enter Piketty. With an impressive collection of data going back centuries to back his theory up, he argues that the underlying mechanisms of capitalism are likely to reassert themselves, once again generating “arbitrary and unsustainable inequalities that radically undermine the meritocratic values on which democratic societies are based.”

There are two ways of thinking through this – the first is to determine how governments would deal with a situation like this, if it were to arise. Would a significant percentage of the population need to be supported by a welfare state? Can one then say that democratic states that adhere to a capitalistic model are actually driving themselves to a state of socialism? Mass unemployment is bound to cause social unrest at scale – what are governments to do then? Entirely new models of dealing with this would be needed. Are people thinking about this before it’s too late?

The second is about how we can check and overturn the events that are leading us down this path. Can we prevent the accumulation of wealth in the hands of a few, and forestall the rise of a class of rentiers – the small group of wealthy yet untalented offspring of the current generation – who would control vast sections of the economy and strike down competition from the talented but poor have-nots? In essence, are we saying that our economic future in a few generations will look like Europe before the First World War unless something is done about it?

Perhaps we are. So what should we do about it?

What Piketty proposes is the introduction of a global progressive tax on individual net worth. Those who are just getting started in their careers would pay little, but those who have billions would pay a lot. This would not only make it easier for people to climb the ladder, but it would also inject transparency into the processes that drive global wealth dynamics by putting them under public scrutiny – as he mentions, “The lack of financial transparency and reliable wealth statistics is one of the main challenges for modern democracies”.

It sounds good on paper, but from a practical standpoint there is more to it than meets the eye. As Tim Worstall points out in his excellent Forbes article, there is a real-world barrier to how much tax can be extracted from the super rich, in the same way that sales tax rates are bound by real-world constraints.

Sales tax is levied at the point of retail, with the final seller of the product paying a regulatory authority a portion of the sale price. If we wanted to charge a much higher tax on a given product, it couldn’t be done purely at the point of sale itself – we would have to levy it at every step it takes to manufacture that product, so that every participant can recover the tax they’ve already paid. This is what allows a pretty high tax rate (in the form of a Value Added Tax, or VAT) to be charged without issues like tax evasion eating into the collected amounts.

An issue not unlike the one described above tends to occur when we try to levy higher wealth taxes. Countries like France do levy a wealth tax at around 1-2%, but they’re able to do this only because the rate of return on capital is much higher than the tax rate, and the super rich are able to pay it out of their income while maintaining their wealth.

What Piketty proposes is a tax that is much higher – so much so that it eats into the wealth itself. It’s almost like taking from the rich and giving it to the poor, except in a regulated and legal way. Robin Hood would be proud. And this is where we run into a vexing problem.

If a higher tax like this were imposed, it would mean that the super rich would need to give away money from their holdings to pay it. Assuming we were to set this tax at 10%, someone like Bill Gates or Warren Buffett would have to pay out around $7-8 billion per year in taxes, meaning they would have to liquidate part of their wealth (which in most cases today tends to be locked into stocks, securities and other investments such as art or jewelry). The question now is – who buys this from them in return for cash? Stocks and financial instruments are relatively easy – there are enough organizations such as mutual funds that are willing to offer hard cash for them. But how do they sell immovable assets like property (after all, one can’t just sell 10% of a mansion), or priceless pieces of art? In trying to sell at this scale, we reduce the value of the very items being sold, which in turn reduces the wealth of those who hold them.

Piketty’s book doesn’t hold answers to this question – but it is definitely different from the others in that it offers not just a set of guidelines to policy makers on potential solutions to stem the rate at which inequality is increasing, but also makes a call to people on the street to “take a serious interest in money, its measurement, the facts surrounding it and its history”. As he mentioned to an interviewer,

“It’s too easy for ordinary people to just say, ‘I don’t know anything about economics’, but economics is not just for economists.”

Agreed. Even if his proposals don’t end up being actionable very soon in any real manner, he has started a public debate that I hope will have very real repercussions in the way we think about income inequality and ways to address it today and in the future.

Thoughts on the Tablet era

Apple’s iPad has been the poster child of the “post-PC era” ever since its inception. As the device has matured, however, it has gained competition from practically every company that can build a hardware device, from Microsoft to Samsung. Its reviews have gone from hailing it as the harbinger of the post-PC era to claiming its interactions are broken to the point where it will never serve as the general-purpose handheld computer it was once slotted to be.

At least by the media.

Steve Jobs knew this wasn’t going to be the case when he said this in a 2010 interview,

When I am going to write that 35-page analyst report, I am going to want my Bluetooth keyboard. That’s 1 percent of the time. The software will get more powerful. I think your vision would have to be pretty short” to think these can’t grow into machines that can do more things, like editing video, graphic arts, productivity. “You can imagine all of these content creation” possibilities on these kind of things. “Time takes care of lots of these things.”

I agree with the sentiment. For a majority of the use cases, the iPad and others of its ilk will do just fine. But as we start to mature in our use of such devices, the simplistic interfaces that exist today just won’t cut it. What we need next are methods that make this device even more powerful than it is today – and that is by unleashing a whole new series of content creation paradigms.

Think about spreadsheets – Microsoft Excel for iPad has *just* been released – four years after the original iPad came out. And we still can’t run macros on it. Because of a policy decision somewhere in the Apple ecosystem, the most dominant end user programming language that comes with Excel is unusable on the tablet – which completely undermines one of the most powerful features that desktop Excel offers. And more importantly, not one spreadsheet with macros can run on the iPad – effectively rendering Excel for iPad useless for cross computer collaboration.

Lest one think spreadsheets are an isolated case, consider the workflow of writing this blog post and publishing it. Once I figure out what I’m writing about and what the essential facts I want to convey are, my flow is mostly split between composing text in a text editor and using a browser to do research – gathering quotes, images, et al and somehow embedding them in the post. A trivial task on the desktop, with the availability of quick app switching, lots of screen real estate, and simple-to-use copy/paste. Not to mention having persistent storage on your hard drive. On the current tablet model, this simple task becomes needlessly complicated. The drawback of being able to run only one application at a time means that more time goes into switching between apps than into actually getting effective work done.

The next big revolution has to be in defining paradigms for these oft-used, non-trivial interactions in the touch world. The company or product that lights the way in doing so will capture a significant portion of mind-share and, hopefully, the market. Which is not to say there aren’t a few positive trends in this direction – Hopscotch, a programming application for kids that allows one to build an iPad app from within an iPad, is quite excellent. For the first time, you can actually use the tablet to create content for it. But it’s early days yet.

This week saw a couple of interesting developments in the world of tablet computing though.

Microsoft released the Surface Pro 3 which, as per almost every review I’ve read so far, is being hailed as a laptop killer. After looking at videos, pictures and specs, I’m inclined to agree. It can run all kinds of native Windows applications, and offers a solid keyboard, a stylus for precision work and a form factor that doesn’t make it look like a compromise, as Microsoft’s earlier tablets were wont to do. But one of its biggest drawbacks is that it merely tries to replace a laptop – offering a sleeker, thinner, lighter, touchscreen-enabled version of a traditional laptop that can compete in the ultrabook market – with no real innovation in the touch-interaction arena. In my book, that is a mistake.

Mary Meeker released her State of the Internet presentation, arguably the one presentation in the year which seems to be an event unto itself. In it, she presents a chart that blows away the recent meme of “tablets have peaked and are dying” – almost 80 million tablets were sold, which equals the combined numbers of desktop and laptop computers!

What this means is that all of a sudden, Microsoft has a tablet that rivals a MacBook Air in the kind of functionality it offers. You can run full apps on it, and it offers a trackpad for fine-grained manipulation. Apple, on the other hand, has an entrenched tablet that hasn’t really moved the needle recently in terms of game-changing features, and offers watered-down versions of the full-scale desktop applications that the Surface can run.

What we’re missing is someone to show the way on what the next generation of tablet interactions are going to look like.

Nehru and Modi

Quoting Ramachandra Guha,

In his pomp — which ran roughly from 1948 to 1960 — Nehru was venerated at home. Representative are these comments of The Guardian, written after the Indian prime minister had addressed a press conference in London in the summer of 1957:

A hundred men and women of the West were being given a glimpse of the blazing power that commands the affection and loyalty of several hundred million people in Asia. There is nothing mysterious about it. Mr Nehru’s power is purely and simply a matter of personality. … Put in its simplest terms, it is the power of a man who is father, teacher and older brother rolled into one. The total impression is of a man who is humorous, tolerant, wise and absolutely honest.’

Perhaps written unwittingly, those last sentences have the potential to become the guiding light to anyone who seeks to lead any group of people towards anything.

As Narendra Modi steps into the Indian Prime Minister’s role today, it would be wise of him to keep those sentiments in mind and look for guidance not from his immediate set of predecessors but, at least in some respects, from the man who started it all.