Author: gkjohn

  • Thoughts on the Freedom to Read

    (Written as part of Nilanjana Roy’s series on Banned Books Week)

    It isn’t always necessary to ban a book to ensure it isn’t read. With six in ten children unable to read grade 2 texts and half of all children in public schools graduating without being able to read fluently, most books will remain unread. The public education system has created, and continues to create, generations of children for whom books are effectively banned.

For the fortunate few who can read, there are further roadblocks to reading. First, of the 80,000-odd new books published each year in India, only around 30% can be considered children’s books. And of those, 50% are published in either Hindi or English. This, in a country that has 21 constitutionally recognised official languages and many hundreds more used across the nation. It is fairly appalling that while the United Kingdom prints close to six books for every child, the Indian equivalent is one book for every 20 children.

To make the “Freedom to Read” a meaningful proposition in India, there are at least three elements of the puzzle that require bolstering: we need more content, in more languages, and ways of circumventing the high cost of distribution in India. ‘Innovation’ is a much-abused term that, in the context of the Indian children’s book publishing industry, has invariably begun and ended with product and price strategies. For the “Freedom to Read” to be truly effective, publishers will need to create new models of innovation that address the entire content cycle – from the creation, distribution and consumption of content to the conversation around it – to make an impact on the gargantuan problem that this space represents.

These are not unrealistic expectations either. At Pratham Books, where I work, we have been piloting many such innovations across the spectrum and have learnt a great deal from these experiments. At the product level, we now have products that span the range from Rs. 2.00 to Rs. 30.00, in multiple forms as well – from story cards to books to newer folded-paper story formats that are very low cost but maintain high standards of product and content quality. The distribution of reading material, so long as it is tied to physical formats, remains challenging, but we have explored new avenues, including non-traditional ones such as the railways and the postal service. However, the largest innovations have come at the strategic level – of what it means to be a publisher.

As a publisher, we constantly ask ourselves what the dimensions of our mission statement – “A book in every child’s hand” – are, and what the contours of the problem we are solving are. For example, if access to reading material is the problem, producing low-cost (yet high-quality) books might very well mean that we are only moving the problem to a lower price point. Similarly, does it have to be a Pratham Books book in a child’s hand, or does it suffice to enable a child to have access to any book?

However, some of the more vexing questions go to the heart of being a publisher – were we acting as content creators or as gatekeepers of content? What was the rationale for keeping content that had been published, but might never be re-published, locked up by asserting copyright over it, and was there greater value in setting such content free? Given our audaciously large mission, we had to find ways to create infinite good with finite time and resources and, in the process, to create more value within the ecosystem than we capture. With this background, we realised that innovation at a process, product and service level alone was not enough, and that we needed to innovate at a business model, strategic and management level as well.

Having answered most of these questions using “openness” as a test (whereby we asked whether allowing unrestricted access to the use and re-use of our content furthered our mission) and finding that it did fit our mission, the second set of questions to answer was more technical – how does a small non-profit accomplish “openness” without finding itself overwhelmed and sapped of resources? It was at this point that we had a moment of realization – that reading is an extremely social activity and that there are communities and organizations more than ready to help us achieve our goals. While much has been written about this model of ours (see: http://wiki.creativecommons.org/Case_Studies/Pratham_Books and http://blog.prathambooks.org/p/cc-tracker.html), I will restrict myself to two elements of this strategy that strengthen the “Freedom to Read” in the context of the challenges laid out at the outset.

What this new content model has allowed for is the creation of multiple derivative works, using a single Pratham Books book as the catalyst – in languages we are unable to publish in, locally printed in places we are unable to deliver to, and read in formats we are unable to publish in – without any negative impact (we might go so far as to say, with a positive impact) on our revenue streams. Most importantly, it allows for an inclusive “Freedom to Read”, where even those who are print-challenged have content made available to them in formats that they are able to consume.

Secondly, this ‘free’ content, allied with our mission, has created an incredibly engaged, vibrant and active community around reading – a community of champions who are foot soldiers in making this right to read significant. Over the last three years, we have experimented with assisting individuals in holding book-reading or book-launch sessions in geographies in which we are absent. This came to pass as a request from the community itself – that while Pratham Books could not be present everywhere, the community was, and all it needed was a minimum amount of material support to extend our reach. Year 1 saw 19 such events on a single day, year 2 saw 54, and this year we had over 400 reading sessions conducted by over 170 champions covering 28 states and 2 union territories – in the 5 languages the book was published in, along with 9 new languages that the book was translated into by the champions – and impacted over 18,000 children. All in one day. (See: http://prathambooks.org/1-day-1-book-250-sessions and http://champions.prathambooks.org)

For the “Freedom to Read” to be effective, innovation must be at the core of the publishing world, not merely at its periphery, and we must leverage the power of the collective to achieve this societal goal. But mostly, it involves us, as individuals and as a nation, acknowledging that it is an important and basic right. Without that, stasis is all that remains.

  • Thoughts on the Future of Content

(I recently spoke at the India Social Summit in Delhi on the Future of Content. My slides, video and a transcript of my talk are below. I am deeply indebted to Nandita Saikia for transcribing my talk.)

    Video:

    Slides:

    The Future of Content – Gautam John at the IndiaSocial Summit 2012

    Transcript:

So, I’m ostensibly talking about the future of content, but what I’m really going to do is trace an arc of content from historical times to the present to see if anything has really changed. So when we first think about content, be it in the modern age or in a historical one, we think about books as something that we can touch, or music that we listen to, or video that we watch. And, truthfully, the content age we imagine started with Gutenberg and the printed Bible.

What that allowed us to do is produce accurate copies faster, and most likely cheaper, than ever before. So it was the Gutenberg press and the Gutenberg Bible that pretty much kicked off our entire content revolution, so to speak. But that is not to say that there was no content before that. Before Gutenberg, monks in monasteries handwrote books, and what’s interesting to note is that we lost something with the Gutenberg revolution; we lost the ability to make changes. When monks wrote and copied out manuscripts and books, it wasn’t rote copying. What they did was introduce small changes that kept the material contemporary, changing with the times.

And even before that we had, in India, the guru-shishya system of learning and of storytelling, which allowed stories and content to morph with time and take on local flavour and context. In some ways, when Gutenberg invented the printing press, we lost that entire subtlety that content had – which raises the question: what does all of this content do?

Content has, historically and today, been what I’d like to refer to as a social object. It has catalyzed conversations. The old coffee shops of London, and of Europe, were centres of conversation, and this conversation was always catalyzed by objects of content – be it a talk, a newspaper, a book, or a political philosophy. Coffee shops have played a role in revolutions as well, and I’d like to think coffee shops were the original Twitter: people talk about content, people voice unasked-for opinions, and they have fomented revolutions too.

When you talk about content, the next thing that people consider is piracy. That’s the big, big, big thing. And piracy isn’t new. As far back as 1557, the Queen of England gave the Stationers’ Company an exclusive monopoly to publish and copy content in the United Kingdom. They were the only people who could make copies, and 1603 saw the first recorded use of the word ‘pirate’ in this sense: people who made unauthorized copies of content were called pirates.

Of course, this entire model of content is something that I would like to look at in a more structured way, as what I call ‘the content lifecycle’. And as I see it, the content lifecycle has four essential ingredients. [One:] The creation of content, which can happen in multiple ways – for example, the Gutenberg press, rote copying, or manuscripts with small modifications introduced. [Two:] The distribution of content, be it through bookshops and centralized models, or even through peer models. [Three:] The consumption of content – which is, as we like to say, to watch a movie, to read a book, or to listen to music. And the last one, which actually closes the loop, is the conversation around the content.

And, truth be told, historically, till about 20 years ago, these conversations all happened within horizontal groups of peers but never really exploded on a global scale. What made that possible was the Internet, of course. What the Internet has primarily done in terms of content is three things. The first is that geography is now history: I’m as close to Manu, who lives near my house, as I am to people in the US who live far away and whom I have never seen. The Internet has allowed us (a) to break down geography as an issue, and (b) to accelerate the speed of conversation and to amplify it – the velocity of discussion, so to speak. And the third, most important thing from a content perspective is that, because the Internet is all-digital, it functions as one big copying machine. Bits and bytes are far more easily copied than physical objects.

When you bring content and the digital era together, what are some of the things that could happen? For one, we no longer have the printed Encyclopaedia Britannica; first published in 1768, 2010 saw its last hardcover edition. If I look at this through the lens of the content lifecycle model that I posit, it’s interesting to examine why, perhaps, this is the last edition of the printed Britannica.

Britannica is produced in a very traditional, hierarchical manner; the content creation is done by experts, and is done in a slow, methodical, analytical and, quite truthfully, verifiable manner. It has historically been distributed in one format – the volumes of Britannica that grace our shelves, or the World Book, which was popular in India. It’s only ever really used as a referential encyclopaedia; no one ever says, “I want to read Britannica today,” and flips open a volume and continues reading. Nobody ever asks, “What does someone think about Bangalore?” and opens Encyclopaedia Britannica anymore. The third element is consumption, which is very specific and not really widespread. And the fourth is the conversation loop. Very rarely do we have conversations about Encyclopaedia Britannica. 50 years ago, sure, it was the de facto reference work, but that’s no longer the case.

And if you contrast this with what many people say caused the death and downfall of the Encyclopaedia Britannica – Wikipedia – you’ll see that its content cycle is completely different. The creation is not done by high priests of knowledge; it is done by you and me, and it is validated by you and me. The distribution is not via print (although you can buy Wikipedia in print); Wikipedia is available on as many devices as you own, and then a few. The consumption is fantastic; Wikipedia links are tweeted every day. If I am having an argument with Kartik about the nature of global warming, I will tweet him a link to Wikipedia, not to Encyclopaedia Britannica. And the fourth element, which is really, really important, is the conversation loop: Wikipedia’s content has been built upon by the conversation that goes on not just between people using Wikipedia but between people editing it. Editors talk to each other; editors keep content current. And that, to me, is the huge charm of what the digital era allows.

Of course, what I find charming, other people find terrifying. And the truth of the matter is, I think, as much as we have nostalgic memories of the old, we’ve passed the inflection point where a return to it can actually happen, and the sooner we recognize that, the sooner we can build more innovative models. But something to keep in mind is that nostalgia will always command a premium. There will always be a place for older models and forms of content, but those will necessarily move to more premium models. Which, of course, is a good thing, because the amount of content that’s available to us is just enormous.

The challenge for content creators, really, is that the distinction between the amateur and the professional has blurred. The amateur and the professional have access to the same tools, the same networks of distribution and the same limited attention span that you and I have. So the question really is: if everyone’s producing content, and everyone has access to the same tools, what are some takeaways for content producers and businesses built on content?

The first one I’d like to think of is how we consume content today. So, this is an interesting comic that someone called The Oatmeal put up recently. (I don’t know if you saw it.) He read this wonderful book called ‘A Game of Thrones’, and then he said, “Oh! There’s a TV series out, because people on Twitter are talking about it,” and he goes to watch it on Netflix (of course, it’s not available). Then he says, “Let me buy the DVD,” – and of course, as we all know, because India is in DVD Region 5, we get DVDs much later. So, finally, he does what most of us would do – not me, of course, but most of y’all – he downloads the torrent. That is still consuming content, but it’s a loss of potential revenue for an organization, even though he tried really hard to pay for it.

The four broad trends I’d like to touch upon – the first one is: I think the notion of control of content is over. The same digital technology that makes new things possible also makes control impossible. We can no longer control the content. DRM is dead. DVD region encoding is dead. I’m happy that Flipkart agrees with me that DRM is dead. The second is that content now needs to be always available. There is no more artificial scarcity that we can introduce into the content ecosystem. You cannot say the US will watch it before me, because I’m speaking to people in the US every day, and I want to watch it when they are talking about it. So content has to be globally available and always available. You cannot close your store, as IRCTC does between 10 PM and 5 AM. It’s not going to happen, because the BitTorrent store is always open.

The third thing is accessibility, and I use accessibility in two ways. The first is: I want to buy content once and consume it everywhere. You, as a content producer, either need to make that possible or I will go on to make it possible myself. I will rip my DVDs, and I will download e-books of the printed books I own, because I don’t see doing that as morally ambiguous.

The corollary to accessibility is something I’d briefly like to touch upon: open standards. I think we now have the technological ability to make things consumable everywhere because we are driving towards a world of open standards. The other rationale for accessibility and open standards is that we now have the chance to include everyone in the consumption of content – the visually impaired, the print-impaired – technology makes this possible. Content producers need to embrace open standards and accessibility, not just for those of us who want to access content on multiple devices, but also for those who have historically never been able to access content at all.

The fourth point is around the consumption of content. We have traditionally looked at content consumption as something that is very individual, very personal: to read a book, to listen to music, to watch a movie alone – and these are things that we glorify. Those will always exist – I don’t think they’re going away – but what we now have the ability to do is transform the consumption of content from an individual activity into a shared group experience; to actually consume content the way we live, in groups. And what’s fantastic is that while the Internet makes this possible, it also makes possible the viral spread of this content consumption, and word-of-mouth marketing, which is by far the most powerful form of marketing. So, to make your content consumable in groups and as an experience is also to make viral spread through word-of-mouth marketing possible.

Of course, this is fairly worrying to some, and I think every epochal shift is marked by people bemoaning the loss of business models, and by much use of law and technology to limit what is possible and to criminalize what we otherwise call sharing. But there’s something very interesting that I read recently. Nina Paley – I don’t know if you know her – made this wonderful animated film called ‘Sita Sings the Blues’, and not only did she make it available for free, she also made it available for people to remix and translate. She was recently addressing a group of 17- and 18-year-olds in the US, and her big concern was, you know: “Do you guys always want to download everything and just use it for free?” And their point was, “No, we don’t want everything for free; we just want everything.” And I think that’s the key takeaway for us.

I’ll briefly touch upon six broad takeaways that I had. One is to move from content to experiences, because experiences are far more valuable. Content is based on an artificial scarcity, whereas experiences are authentic and real. We need to move away from the ‘content is king’ model to the ‘people are king’ model. Dina briefly touched upon insight yesterday, and I think the corollary to that is that we now know more than ever before about what you and I like to listen to, watch and play with, and the content industry can no longer sit in its ivory tower and say, “Here is what you will listen to.” We have the ability to tailor and customize experiences.

I think the models of protection, be it law or technology, are fast dying, and we need to move from protection to sharing. Solitary consumption of content will move to group and shared experiences, and what’s tremendously powerful there is that you then have the ability to influence not only what other people watch, hear and listen to, but also the kinds of content being created. The content industry needs to move from being gatekeepers of content to curators of content, and top-down models of content creation will go the way of the dinosaur very soon because (a) we have the Internet to distribute content and (b) the tools are available to everyone. So the high-priest model of content creation will very soon be challenged, as we saw in the case of Encyclopaedia Britannica, by community models of content creation.

And I’d just like to thank three people for their insights and contributions to this presentation: @dina, @sumants and @karthik_perumal.

  • IndiaSocial Summit 2012 — Future of Content

    A Power Talk at the IndiaSocial Summit 2012, on the future of content.

    Gautam K. John, then with the Akshara Foundation and an advisor to Inclusive Planet, giving his presentation at the Power Talks: Future of Content session, IndiaSocial Summit 2012.

    Transcript

    This transcript was generated with AI-assisted transcription and may contain occasional transcription or speaker-attribution errors.

  • Thoughts on Incentives in Public Primary Education

Video of the talk I gave at The Takshashila Shala in Chennai on the 7th of November, 2011. Based almost entirely on the work of Karthik Muralidharan.



  • Thoughts on Civil Disobedience

    (Written for the March 2011 issue of Pragati.)

An inquiry into the social, cultural and historical narrative around civil disobedience as a legitimate tool of public protest would traditionally highlight the successes of these movements – Egypt in 1919 against British occupation, Gandhi and the Indian independence movement, the movement against Apartheid, the American civil rights movement and, more recently, movements across the Baltic states. Tempting as it might be to believe that civil disobedience is a legitimate tool that citizens can deploy in situations that are morally or politically at odds with their individual consciences, a question worth asking is whether it is a legitimate tool within the framework of a democratic State.

Martin Luther King, writing in his letter from Birmingham Jail, states that “In any nonviolent campaign there are four basic steps: collection of the facts to determine whether injustices exist; negotiation; self-purification; and direct action.” He advocates these as escalating steps: the first step is to gather evidence that such injustices are, in fact, occurring; armed with such evidence, negotiations are entered into; and only when such negotiations irretrievably fail, or are unavailable, should direct action be resorted to. While direct action seems to imply more than just peaceful confrontation, King explains that direct action is to be non-violent and “ … seeks to create such a crisis and foster such a tension that a community which has constantly refused to negotiate is forced to confront the issue. It seeks so to dramatize the issue that it can no longer be ignored.” Or, to put it otherwise, direct action is for when the traditional models and pathways for citizen engagement have broken down or can no longer be relied upon.

Recently, Dimitris Reppas, the Greek minister for public transport, stated that the Government would not let “Greece [be] exposed to the risk of international disrepute and marginalisation, [the] destination of countries characterised by anomie. The attack on the social acceptability of the free-rider and the political dismantling of its simulacrum of progressiveness is paramount.” As convoluted as that statement is, it is worth remembering that Governments in democracies derive their legitimacy from citizens and that the social contract is not absolute – an electoral mandate is not in itself a blanket justification for policies and laws that are opposed, for that opposition is derived from individual moral and political conscience, which cannot be seen to have been abrogated by an election. As Costas Douzinas writes, “Our implicit promise to obey the government does not mean blanket acceptance of its specific policies. A controversial policy does not become automatically legitimate because it has been enacted in parliament and become law … This is where the right and duty of civil disobedience enters the scene. If state laws and policies conflict with basic constitutional principles, the supposedly highest expression of popular sovereignty, the obligation to obey disappears and dissent replaces consent.”

A common criticism has been that democracies offer multiple methods for the resolution of differences – regular elections allow those opposing a Government’s policies to vote it out of power, and a legal system allows for multiple levels of challenge to laws and policies that are seen as unjust. However, these arguments often fail to take into account that opposition cannot always be expected to wait for an election cycle – there is no obvious reason an unjust policy should not be challenged at the point of instatement. Further, where special interest groups direct and shape policy, a civil movement might be the only available recourse to bring an injustice to attention. Not all civil disobedience movements confront issues that can be raised in a court of law; in many instances, where the law trails social trends and mores, a movement of civil disobedience might be the only way to challenge legal norms that are clearly out of sync with social norms. An important corollary to this is that it is sometimes necessary to disobey an unjust or unconscionable law in order to bring a legal challenge to it, and that individuals engaged in civil disobedience must be willing to bear the consequences of such action.

The discourse around civil disobedience is complex and layered and touches upon multiple other manifestations of it – whether violence is acceptable, whether non-cooperation is a valid tool within this context, whether it is a recipe for anarchy if multiple groups choose this as a model for their own struggles against what they perceive to be morally transgressive laws, and more. While the answers vary depending on the philosophical school of thought one engages, a common thread is that there is a strong moral mandate for individuals to follow their own conscience, and it is this that forms the basis of civil disobedience.

Rawls’ account of civil disobedience centers on the notion of a nearly just society, but real-world exigencies point to a state of affairs far from this, and civil disobedience movements have, in many ways, morphed to deal with these realities. They remain, however, fundamentally movements that center on conscientious opposition by individuals and a willingness to face the political and legal consequences of such action.

  • Thoughts on Unicode in India

I have been thinking about multilingual publishing on the Web and in other digital forms, specifically in Indic languages, in the context of my work at Pratham Books and around Wikipedia, and the more I think about it, the more I believe it to be crucial in enabling the preservation of language and culture, as well as in being able to publish and spread knowledge and culture.

These are early thoughts that I will refine over the next few months into a white paper, and I would welcome feedback on them.

1. While there are many ways to achieve a legal framework for inter-operable content (CC, GFDL, PD, the Copyright Act Amendment for the Print Impaired, etc.), there needs to be a technical framework for such interoperability as well.
2. Given that we (Pratham Books) publish in Indian languages, using Unicode is the only way to achieve cross-platform interoperability, and it is a global standard.
3. Given India’s push towards copyright reform for the print impaired, it is imperative that Unicode be used in the creation of Indic content, because anything else is a huge barrier to conversion to print-friendly formats.
4. Unicode, being an open global standard, guarantees the accessibility of content in the future and ensures no proprietary font or vendor lock-in.
5. The limitation is the lack of high-quality and varied typefaces – OpenType Indic Unicode fonts optimised for both screen and print.
6. Given the importance of linguistic diversity to India’s cultural heritage, it is imperative that greater attention is paid to the development of such fonts, under licenses that allow for free re-use and for fixing issues in the fonts that might arise.
7. The Government should fund the open development of at least 5 such fonts for each of the 21 constitutionally recognised languages and make these available not just for free, but under a free license to re-use and improve as well.
8. The GoI has recognised this and notified Unicode 5.1.0 as the standard for all eGovernance projects. This standard needs to be more widely adopted for all Government digital projects and for any software or content procurement as well.
9. Use of Unicode significantly reduces bandwidth and storage requirements, allows for universal search (within a page, across the web, etc.), sorting and indexing, enables text-to-speech synthesis and machine translation, and allows for greater and better search engine optimisation.
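The claims in point 9 can be illustrated with a short sketch in Python. The words below are ordinary Hindi nouns chosen purely for illustration; nothing here is specific to any publisher's systems. Because Unicode stores text as standardised code points rather than as glyph codes tied to one proprietary font, generic string operations – search, sorting, serialisation – work on Indic text out of the box:

```python
# Three Hindi words ("kitab"/book, "kahani"/story, "chitra"/picture)
# stored as plain Unicode text.
titles = ["किताब", "कहानी", "चित्र"]

# Universal search: plain substring matching, no font-specific hacks needed.
matches = [t for t in titles if "कहानी" in t]
print(matches)

# Sorting and indexing: code-point order gives a consistent (if not fully
# locale-aware) ordering across platforms; proper collation would use ICU.
print(sorted(titles))

# The same text serialises to standard UTF-8 bytes for storage/transmission,
# readable by any Unicode-aware software.
print([len(t.encode("utf-8")) for t in titles])
```

With legacy font-encoded text, by contrast, the same bytes mean different characters depending on which font is installed, so none of these operations is reliable – which is the interoperability argument in a nutshell.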

I have to add that I am not an expert in this field and may have got things completely wrong – I really do need your help in working on this, please.

    Update on 22.02.2011

Thanks to Santhosh over at the Wikimedia India mailing list, I have learnt much. In particular, that Unicode isn’t a font as such but a method of encoding information.

    From what I understand – there are three components:

1. Input (different types of keyboard layouts are used, but these are independent of the method of encoding).
2. Encoding and storing the input (ASCII is the older method; Unicode is the standard).
3. Representing, visually for the human user, what has been input and encoded (fonts or typefaces, which are, to an extent, independent of the encoding method used).
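As a minimal sketch of how these three layers separate in practice (using the Devanagari word भारत, “Bharat”, as an example): the keyboard layout produces characters, Unicode defines the code points and their byte encoding, and rendering the glyphs is left entirely to the font:

```python
import unicodedata

# The word "भारत" (Bharat / India), as produced by any Devanagari input method
# (layer 1 - the keyboard layout does not matter once the characters exist).
word = "\u092D\u093E\u0930\u0924"

# Layer 2: the abstract Unicode code points, independent of keyboard and font.
for ch in word:
    print(f"U+{ord(ch):04X}  {unicodedata.name(ch)}")

# The same code points encoded to UTF-8 bytes for storage or transmission.
encoded = word.encode("utf-8")
print(len(word), "code points ->", len(encoded), "UTF-8 bytes")

# Layer 3 (rendering) happens outside the program: a font maps these code
# points to glyphs, which is why the same file displays correctly in any
# Unicode-compliant Devanagari font.
```

This is why a document stored as Unicode survives a change of keyboard or font, whereas text stored in a legacy font encoding is tied to that one font forever.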

An excellent resource: “List of available Indic fonts for scripts encoded in Unicode” – http://indlinux.org/wiki/index.php/IndicFontsList

    Update on 17.03.2011

    To add: http://generalprop.newschallenge.org/SNC/ViewItem.aspx?pguid=900d111c-1475-4d78-8fab-789663818724&itemguid=696ce9df-02c5-4585-8062-a96e57ef50f7

  • Thoughts on Wikipedia in India

Wikipedia turned ten on the 15th of January 2011, and its history is both well known and fairly well documented. Globally, Wikipedia is the fifth most popular website, with the English Wikipedia being the most popular destination. What is far more interesting to note, however, is that close to 98% of the traffic from India goes to the English-language Wikipedia, with the remainder going to an Indic-language Wikipedia – which raises a question of interest: what is the history of Wikipedia in India?

Wikipedia is popular in India – current data shows that it is the seventh most popular site in the country, ahead of many popular sites including Twitter and Orkut. While it is well-nigh impossible to pinpoint the first edit, or the first person who read or edited Wikipedia in India, it is possible to use proxies for this investigation.

The article on India on the English Wikipedia was first created on the 26th of October 2001 and languished for many years – between 2001 and 2003 it saw only 199 edits. 2004 saw 1,700 edits to the page, 2005 had 2,311 edits, and contributions peaked in 2006 with 6,752 edits. From 2007, the number of edits has steadily dropped, and the period from 2007 to 2011 has seen a total of 6,925 edits. The page is watched by 2,329 people who maintain a constant vigil over changes made, was viewed 1,313,608 times in December 2010 and was the 39th most viewed page on the English Wikipedia.

    The India page is now available in 216 languages, has been a featured article in 9 languages and is linked to from over 1500 other pages. A reasonable inference is that interest in Wikipedia in India broadly corresponded with the timeline of the India page’s evolution. Wikipedia is now available in over 20 Indian languages, with a further 20 Indic languages in incubation. 

    However, India and the Indian language Wikipedias seem woefully under-represented when one compares the size of the pool of native speakers of each language with the number of articles on the corresponding language Wikipedia. Further, it is worth noting that the Wikipedia community in India is necessarily very different from similar communities across the world because of the diverse languages that are a part of the Indian identity. In terms of size, Hindi is the largest Indian language Wikipedia with around 67,221 articles, with Telugu, Marathi, Bishnupriya Manipuri and Tamil making up the rest of the top five, though none of them has over 100,000 articles. The first Hindi article was begun in July 2003 and the Hindi Wikipedia crossed 1000 articles in September 2005; the first Telugu article came in December 2003, with Telugu crossing 1000 articles in October 2005; the first Marathi article in May 2003, crossing 1000 articles in May 2005; the first Bishnupriya Manipuri article in August 2006, crossing 1000 articles in November 2006; and the first Tamil article in September 2003, crossing 1000 articles in August 2005. However, Oriya, Punjabi, Assamese and Malayalam were the first Indian language Wikipedias, all started in 2002.  

    Writing in the September 2010 edition of the Wikimedia India Newsletter, Shiju Alex and Achal Prabhala opine that:

    “Indians working on English Wikipedia form perhaps the most active Wikimedia community in the country. This might be surprising for many people outside India, but within, it is fairly obvious that English is an important Indian language (it is one of India’s “official” languages) and also the most significant bridging language between different language groups. Indeed, English is the language that connects Wikimedians from various language groups in India. What we call the “mother tongue” (i.e. the native Indian language of one’s parents) is usually not English, and yet for a number of people, English remains the preferred operating language in educational, professional and online life.”

    Given the varied language communities in India, it is worth noting that several of them have been very active and have been a primary factor in driving editorship in their respective languages. Common to all these language communities are outreach activities: a growing number of regular meet-ups across the country (Bangalore has held 23 consecutive monthly community meet-ups since July 2009), Wiki Academies (hands-on tutorial sessions on how to edit Wikipedia) and other such outreach processes that are very important to evangelize Wikipedia projects and bring new editors into the fold. In parallel, there has slowly been traction from Governments as well. The Malayalam Wikipedia community recently released an offline version of Malayalam Wikipedia containing 500 selected articles, which was distributed by the Kerala government to thousands of schools in the State. The Tamil Nadu government recently released a glossary of thousands of technical terms collected by the Tamil Virtual University for use in the Tamil Wiktionary project, and also organized an article competition across the State covering over 3000 universities and colleges, which has introduced Wikipedia to a very large new audience and brought new editors into the fold. It is also worth noting that the National Knowledge Commission, recognizing the importance of free, easy and open access to knowledge, noted in its recommendations on Open Educational Resources that:

    “Our success in the knowledge economy hinges to a large extent on upgrading the quality of, and enhancing the access to, education. One of the most effective ways of achieving this would be to stimulate the development and dissemination of quality Open Access (OA) materials and Open Educational Resources (OER) through broadband internet connectivity. This would facilitate easy and widespread access to high quality educational resources and drastically improve the teaching paradigm for all our students.”

    This is important because Wikipedia and its sister projects are some of the largest repositories of Open Educational Resources in the world.  

    In a case study, L. BalaSundaraRaman traces the history of the Tamil Wikipedia:

    “Tamil Wikipedia was started on September 30, 2003 by an anonymous person by posting a link to their Yahoo! Group and the text manitha maembaadu, fittingly, a phrase that means human development, on the main page. However, for several weeks after that, the site had an all-English interface with little activity. Mayooranathan, in response to a request posted in a mailing list, completed 95% of the localisation between November 4, 2003 and November 22, 2003. He made some anonymous edits alongside. On November 12, 2003 Amala Singh from the United Kingdom wrote the first article in Tamil, but with an English title Shirin Ebadi. The earliest editor who continues to edit actively, Mayooranathan, has written more than 2760 articles and has kept the project alive during an intervening period when practically nobody else was editing. Around five active editors including the author joined the project in the second half of 2004. Some occasional editors turned out to become regular editors and the Wiki started growing steadily. Bugs were reported to fix the interface, policies partially deriving from the English Wikipedia were initiated, and editors started to specialise in tasks like stub sorting, creating templates, copyediting, wikifying, translation, original writing etc. Even at this early stage, the Tamil Wikipedia had a global editorial team representing almost every continent. After registering a period of high linear growth in several metrics on a lower base, the Tamil Wikipedia started witnessing, around April 2007, a low linear growth on a higher base in several quantitative metrics. This period, however, also showed a perceivably super-linear growth in article quality aspects like length, standard of prose, image use, inline citation usage, etc. Late 2008 to early 2009 was a period characterised by a near constant number of active and very active editors, a steady influx of new and occasional editors, a healthy, enthusiastic and continuity-preserving churn, and, above all, optimism for a promising future.”

    There have also been some technical challenges behind the historical lack of growth in the Indic language Wikipedias, in particular in the areas of openly licensed and freely available Indic fonts, difficulties with the cross-platform display of Indic text, and the lack of standardised cross-platform Indic language text entry tools. There have been, and continue to be, many approaches to working on these problems – it is a focus of the Wikimedia Foundation, of language communities and of private organisations. Google and Microsoft have both released tools to help solve these challenges and assist in translation efforts. 
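    One concrete instance of the text-entry problem, offered here as a hedged illustration rather than something drawn from the article: different input tools can produce canonically equivalent but byte-wise different Unicode sequences for the same Indic character, which breaks naive search and collation. Unicode normalization is the standard remedy.

```python
import unicodedata

# Hypothetical example: DEVANAGARI LETTER QA can be entered either as a single
# precomposed code point or as KA followed by a combining nukta sign.
precomposed = "\u0958"        # क़ as one code point
decomposed = "\u0915\u093C"   # क + nukta, two code points

# The raw strings differ, so a naive comparison or search fails...
print(precomposed == decomposed)  # False

# ...but both sequences are canonically equivalent; normalizing to a common
# form makes them comparable. (U+0958 is a composition exclusion, so NFC
# yields the decomposed two-code-point sequence for both inputs.)
nfc_a = unicodedata.normalize("NFC", precomposed)
nfc_b = unicodedata.normalize("NFC", decomposed)
print(nfc_a == nfc_b)  # True
```

    Inconsistent normalization across entry tools and platforms is one reason standardised input methods matter for wiki projects, where duplicate titles and failed searches are the visible symptoms.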

    This inequitable distribution of content by language, skewed towards English and the languages of the traditional geographies of the Global North, has been a frequent point of discussion and a point of focus for the Wikimedia Foundation. Among other things, the Foundation’s strategy plan aims to foster the growth of smaller Wikipedias – by 2015, the aim is to have 100 Wikipedia language versions with more than 120,000 “significant articles” each. To this end, the Foundation also aims to bootstrap community programs in key geographies: India, Brazil and the Middle East/North Africa. In particular, Achal Prabhala, a member of the Wikimedia Advisory Board, has spoken about the need for local representative bodies of the Wikimedia projects, or Chapters, in countries which are linguistically under-represented. He argues that there is a distinct relationship between local growth and the existence of local Chapters, and that geographies in the South present an enormous opportunity for growth. 

    Wikimedia Foundation’s India Chapter has had a long history. First efforts to set up a Chapter began in September 2004 with an Internet Relay Chat meeting, and efforts continued through to November 2007, when there was another round of discussions on the India mailing list and draft bye-laws were drawn up. However, the efforts to set up an India Chapter received a huge boost from two things – Sue Gardner and Jimmy Wales visiting Bangalore in December 2008, and regular Wiki-meetups in Bangalore that were made possible by the Centre for Internet and Society. In July 2009, renewed discussions and activity commenced in connection with the setting up of the India Chapter, and this culminated in India becoming the 29th chapter of the Wikimedia Foundation in July 2010. The Wikimedia India Chapter was granted registration (registered name: Wikimedia Chapter) by the Registrar of Societies, Bangalore Urban District on the 3rd of January 2011. 

    The Chapter’s fundamental mission is to catalyse the usage and editorship of Wikipedia in India, as well as foster Indic language content. To this end, there are multiple tracks the Chapter will need to take – Content, Technology, Outreach, Collaborations, Offline Work, creating Special Interest Groups and Projects.  

    The Wikimedia Foundation, recognizing the importance of India to its growth strategy and understanding the potential in this relatively under-represented and untapped market, recently appointed Bishakha Datta as a member of its board of trustees and has announced that it will soon open its first office outside the United States, in India. As a testament to the growing popularity of Wikipedia in India, the 15th of January 2011 saw over 90 concurrent events celebrating the tenth anniversary of Wikipedia across India, many of them organized spontaneously by small groups of interested community volunteers, with large local participation and substantial media coverage. 

    Even aside from the organic growth of Wikipedia and its local language communities, the development of Wikipedia in India would appear to be only just entering its active growth phase. With the continued growth of the Indian economy, the expected growth in Indian internet users, the advent of cheap and ubiquitous wireless internet access, an active Chapter, a Foundation office in India and the support of India’s relatively free media, the future of Wikipedia in India looks bright and well set for the decade ahead.  

    ____

     

    (The author would like to thank the team that put together the Wikimedia India Community Newsletter in September, 2010, which is available here: http://commons.wikimedia.org/wiki/File:Wikimedia_India_Community_Newsletter_2010_September.pdf. This is the best overview of the state of Wikimedia and Wikipedia projects in India and is well worth reading. This current piece would not have been possible without this Newsletter. The case study on the Tamil Wikipedia can be read in its entirety here: http://ta.wikipedia.org/wiki/%E0%AE%B5%E0%AE%BF%E0%AE%95%E0%AF%8D%E0%AE%95%E0%AE%BF%E0%AE%AA%E0%AF%8D%E0%AE%AA%E0%AF%80%E0%AE%9F%E0%AE%BF%E0%AE%AF%E0%AE%BE:Tamil_Wikipedia:_A_Case_Study)


  • Thoughts on Creative Commons Licenses

    At Pratham Books, we have a very simple mission – “A Book in Every Child’s Hand”. This drives all of our work, and we constantly test what we do against this goal. The mission has two parts: the first is to create more reading matter, so that there is more available for children to read; the second is really a corollary – that we need to be able to get books to where children need them the most, and that the books need to be culturally and linguistically relevant as well.

    This is where our challenge lies – to massively scale the production of high quality, low-cost children’s books for a massively multi-lingual and multi-cultural market. Looking at this challenge it is fairly obvious that this is not a problem that any one organization can solve. The solution has to be scalable, flexible and catalyse our fundamental mission as well.

    At this point, we realised that there were several internal questions to answer, some of them painfully introspective: whether the books we create and distribute have to be Pratham Books titles; whether every book must be paid for by either the reader or an intermediary; whether, as a publisher, we are gatekeepers of content or content curators; how we could create infinite good with finite time and resources; and, most importantly, how we can create more value than we capture.

    Having answered most of these questions using “openness” as a test (we asked ourselves whether allowing unrestricted access to use and re-use our content furthered our mission) and finding that it did, the second set of questions was more technical – how, as a small non-profit, do we do this without finding ourselves overwhelmed? It was at this point that we had a moment of realization – that reading is an extremely social activity and that there are communities and organizations who were more than ready to help us achieve our goals.

    It was at this juncture that we hit upon the Creative Commons licensing model as one that would help us achieve many of our aims of flexibility, scalability and being able to help catalyse our mission of a book in every child’s hands. In particular, three things stood out – a shared value system of sharing and openness, a community that was deeply embedded in these ideals and, from our perspective, it was scalable because it allowed us to license content to multiple organizations and individuals, both known and unknown, with a one time effort of releasing them under a Creative Commons license as opposed to the traditional model which involves time consuming negotiations and discussions with each known organization or individual who wants to use our content.

    As an organization, we did spend some time choosing a license; from our perspective, the choice between openness and sharing reduced to a choice between the Attribution and the Attribution-Share-Alike license. We have decided that the Attribution license will be our default, with a fall-back to the Attribution-Share-Alike license where needed. It is best put by P2PU: “it emerged that our choice lay between two licences: Creative Commons Attribution and Creative Commons Attribution Share Alike …chose to use Creative Commons licences because Creative Commons have become a global standard and are supported by a large international community. Both licences are Free Culture licences and are more permissive than any of the other Creative Commons licences. In other words, the choice was not between two extremes but between two open licences at the same end of the licence spectrum.” Given that our goal was to be as open as possible, it followed that our license choices were essentially those that allowed for the greatest possible use and re-use, because our initial hypothesis was, and continues to be, that being open allows us to fulfil our mission better than a traditional copyright model allows.

    We now use Creative Commons licenses everywhere! We license entire books under CC-BY and CC-BY-SA licenses, we license our illustrations similarly and even photographs and other publicity material too. Over the last year we have been building the foundations for a social publishing model – where we curate communities that are passionate about reading and help us create content. Such a model rests on the idea of a participatory culture and an essential ingredient is a permissive licensing strategy – Creative Commons licenses offers us this, a large community with shared values and an ecosystem to tap in to.

    While this licensing and publishing model works well in theory, it has been extremely heartening for us to see it come to life – our communities have created multiple derivative works ranging from iPad and iPhone applications, to porting our works to OLPC laptops, to creating entirely new books from existing illustrations and, my personal favourite, creating versions of our books for the print impaired – from DAISY and Braille books to rich audio books such that our mission truly does encompass every single child.

    I firmly believe that we would not have been able to achieve what success we have had without the help of Creative Commons licensing. These licenses and the values that they stand for are vital to building and strengthening a digital commons from which we all benefit. I hope you will consider supporting Creative Commons and licensing content that you own or control such that we all benefit from the growth of the commons.

     

    [Cross posted over at the Creative Commons site]

     

  • Thoughts on Corporate Usage of User Generated Content

    Sumant Srivathsan and I have a running conversation over corporations soliciting advertising and content generation by customers without compensation and we’ve written up our thoughts on this and the current TATA Docomo campaign.

    [Gautam] I firmly subscribe to what Cory Doctorow has said, that an artist’s greatest worry should be obscurity and not piracy or, in this case, a lack of suitable monetary compensation. While on the subject of monetary compensation, I also believe that attention is a valuable commodity in its own right that can be monetised, and is a very important element in any modern day business model.

    [Sumant] I don’t have a problem with crowdsourcing content in general, and I concur that attention helps to increase the value of future work. However, what it creates is a never-ending spiral that consistently under-rewards creative work by tacking on the “future reward” rider. I have a problem with this, because it allows the buyer to artificially create a position of strength and dictate the value of products that are not part of the current transaction.

    Our current discussion is over TATA Docomo’s Celebration campaign, where they encourage people to submit creative entries for Diwali as animations, wallpapers and screen savers.

    The relevant terms are below:

    Rules for CREATE CELEBRATIONS:

    12. The winning entry(s) will become a copyright of TTSL and TTSL will have all rights to use the winning entry (s) in whichever forums, media and frequency it deems appropriate, without any financial or any other obligation whatsoever.

    http://twitpic.com/2wqggm

    Terms & Conditions of the CREATE CELEBRATIONS:

    2. Participant acknowledges that all the copyrights rights and intellectual property rights of an entry(s), post its submission would entirely and solely vest with Tata Teleservices Limited (“TTSL”) only. By submitting the entry(s) for the Contest, the participant agrees to assign and transfer all copyrights of the submitted entry to TTSL without any consideration and compensation.

    http://twitpic.com/2wqgli

    In short, all entries and the copyrights in and to them belong to TATA Docomo, and they will pay the winners in each category. However, the Rules for CREATE CELEBRATIONS seem to suggest that only the winning entries become the copyright of TATA Docomo, while the Terms & Conditions seem to suggest that all submissions and entries do.

    [Gautam] I am not a fan of TATA Docomo asking for an effective assignment of copyright to them and a model I would recommend is that they use a more permissive license, such as a Creative Commons license, to allow the artists to keep copyright and be able to share it on their own terms.

    [Sumant] TATA DoCoMo should not be in a position to dictate what an artist does with future work, which may not exist, because that should be entirely the artist’s prerogative. I suppose any artist who feels that way would demand a higher price for the work, and therefore not enter into a contract based on TATA DoCoMo’s terms. Insofar as their terms and conditions are clearly stated, I have no problem with TATA DoCoMo’s offer to contributors.

    I suppose TATA DoCoMo is in a unique situation because they are allowing use of their trademarks in the UGC. I still think that the ideal outcome ought to be a sharing of rights, and not an outright transfer to TATA DoCoMo. I’m not happy with the prizes being offered, which are a fraction of the cost of having the video done by an agency, as this devalues the contributor’s work, and I’m not happy with the attribution. It says “this is an idea of so-and-so” whereas the contribution is completed animation/artwork. That is misrepresentation.

    [Gautam] However, at the end of the day, it’s a free world and a free market and if TATA Docomo has many takers, as I am sure they will, they do so knowing full well of the terms of contract. Caveat Emptor and all.

    [Sumant] I do like the deviantART and Threadless/Inkfruit models of crowdsourcing much more. In both cases, attention is a given. In addition, there is a clearly stipulated revenue-sharing system in both of these models that allows artists not only to monetize the attention they gain, but also the work itself, to its maximum. Also, in both cases, the artist retains all rights to the content they have created, but are shared under a licensing agreement. dA uses a CC license, but I’m not sure about Threadless or Inkfruit.

    UPDATE I:

    TATA Docomo have now changed their Rules to state

    12. Participant acknowledges that all the copyrights rights and intellectual property rights of an entry(s), post its submission would entirely and solely vest with Tata Teleservices Limited (“TTSL”) only. By submitting the entry(s) for the Contest, the participant agrees to assign and transfer all copyrights of the submitted entry to TTSL without any consideration and compensation.

    It’s clear now that all submissions and not just winning entries have their Copyright assigned to TATA Docomo.

    UPDATE II:

    Nandita Saikia has written a post well worth reading on the legal elements of contract and copyright involved in such cases and why such clauses might not hold up under legal scrutiny.

    What are the legalities involved in not only announcing that one owns the property of another person but in doing so while being factually accurate? In fact, going a step further, would it be legally possible to effectively decide that one owns the property of another person without paying that person anything?

    One context in which such questions could arise is in the context of User Generated Content (UGC) which is submitted to websites. Such sites which solicit/accept submissions from users often have, somewhere in the fineprint of their Terms, clauses which state that ‘(1) all submissions shall be the exclusive property of the website, (2) the submitting-artist/user (hereinafter referred to as the “artist”) shall have no right to his/her submission/”work” or therein or to any profits which may be derived therefrom, and (3) the website may deal with the work in any manner it chooses without any reference to the artist, and without making any payment to the artist’. Some websites may acknowledge that the work once belonged to the artist and draft clauses in the form of an “assignment” of copyright or “transfer” of the work.

    Read her full piece over on the Indian Copyright blog.

     

     

  • Thoughts at the Intersection of Publishing and Technology in India

    (With thanks to Atul Chitnis, Sankarshan, Abhaya Agarwal and Sachin Malhan for their inputs.)

    The Indian publishing space is both vibrant and flourishing, with estimates suggesting that there are sixteen thousand publishers who publish, for the most part in printed form, some eighty thousand works a year. What this statistic does not show is how varied the market is across multiple metrics. Linguistically, India has twenty-two languages listed in its Constitution as official languages, and the 2001 census lists twenty-nine languages as having more than one million native speakers. From a technological viewpoint, India has over six hundred and fifty million mobile phone subscribers and over eighty million internet subscribers, while at the same time a recent survey indicated that eighty-four percent of rural India is unaware of the Internet. And yet, India is the fifth-largest source of traffic to the English Wikipedia; ninety-four percent of Indian traffic to Wikipedia goes to the English version, while the second-largest language group for India after English is Hindi, with just over half a percent of the total. These statistics indicate both large challenges and equally large opportunities for both domestic and international publishers in India.

    The Indian publishing market has only just begun to enter its digital age, both in its deployment of technology and in the development of the business models that technology enables. It is also curious to see the marked absence of big international players in India given the remarkable opportunities that this market offers; this gap in the market has been filled by numerous local players who now have a marked first-mover advantage. Looking at this space, one can separate the publishing market into three elements – content creation, content distribution and overarching publishing business models – and there are several interesting established and emerging models within India that are technology-centric in their approach and worth exploring.

    India’s linguistic diversity is mirrored in the works so published, and one particularly onerous problem is an abject lack of diverse “print ready” Indic OpenType fonts that are Unicode compliant; this problem is further exacerbated by the lack of adoption of Unicode in publishing work-flows. This lack of standards-compliant fonts is a serious drawback to technological attempts at publishing existing works in new mediums, as it adds to the cost and time of conversion. A further issue is that optical character recognition of Indic scripts is far more complex than it is for Latin-script languages because of the multiple variables involved. Both the Government and the private sector have sought to address these challenges. The Indian Institute of Science, based in Bangalore, is part of the Million Book Project and has an ambitious goal of digitizing two hundred and fifty million pages of mostly out-of-copyright books in multiple Indian languages; it has scanned over eighty million pages already. Its stated aim is twofold: to ensure access to books that might otherwise not have been possible, and to develop optical character recognition and language processing tools better suited to Indic languages. There have also been private successes at digitizing content, and multiple Indian publishers work with Google Books to digitize and make their content available to varying degrees. The Government has also worked to create Indic OpenType fonts that are Unicode compliant, but there are open issues as to licensing and they have not yet seen widespread adoption.
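    Part of why Indic OCR and rendering are harder than their Latin-script equivalents is that a single on-screen syllable can be several stored code points, with vowel signs drawn in a different position from where they are stored. A short illustrative sketch (not from the original piece):

```python
import unicodedata

# The Devanagari syllable "क्षि" renders as one glyph cluster, but is stored
# as four code points; the vowel sign (U+093F) is stored last yet is drawn
# to the LEFT of the consonant cluster by the shaping engine. OCR has to
# reverse this many-to-one, reordered mapping from glyphs back to code points.
syllable = "\u0915\u094D\u0937\u093F"  # क्षि

print(len(syllable))  # 4 stored code points for one visual syllable
for ch in syllable:
    print(f"U+{ord(ch):04X}", unicodedata.name(ch))
# U+0915 DEVANAGARI LETTER KA
# U+094D DEVANAGARI SIGN VIRAMA
# U+0937 DEVANAGARI LETTER SSA
# U+093F DEVANAGARI VOWEL SIGN I
```

    A Latin-script OCR pipeline can largely assume one glyph per character in reading order; an Indic pipeline cannot, which is one reason tools tuned for Roman text transfer poorly.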

    Given the prevalent challenges around the creation of standards-compliant content, compounded by low internet penetration rates and by the relatively weak purchasing power of Indian consumers, it is not surprising that eBooks have not yet seen widespread adoption, and will not in the immediate future. Amazon’s Kindle, while officially available in India, does not support Indic languages natively, and this drawback, along with its high price point, has been a hindrance to its adoption. Similarly, Apple’s iPad is not officially available in India and has a similar barrier to entry – an extremely high price. However, the domestic marketplace has birthed competitors who have addressed these two weak points. Infibeam’s Pi and EC Media’s Wink eBook readers seek to offer viable alternatives to the international offerings, distinguishing themselves as cheaper alternatives that support major Indic languages natively. That said, they do not have a selection of English titles that is in any way a threat to Amazon’s supremacy in the English publishing space. And, having earlier noted the challenges of standards-compliant Indic content, it is safe to say that these domestic players do not yet have a compelling selection of Indic language content either. Apart from the technological limitations, the most important constraints on eBooks and eBook readers in India continue to be the high price points of the devices and of the eBooks themselves.

    That said, it is again the Government that seems keen to play a catalytic role in this space, for the stated purpose of using technology as a way to scale content dissemination and interactivity for education. In this context it is also worth observing that while many Government-prescribed textbooks are officially available online to download, for free, there remains a lack of digital channels and low-cost hardware devices to distribute this content. Over the last year, the Government has announced multiple efforts to develop low-cost multi-function devices that offer a tablet-like experience at a cost much lower than existing alternatives. Some of these attempts have been met with well-deserved scepticism in the market, but it is safe to say that the Government is intent on pursuing this path. And while India has not officially adopted the One Laptop Per Child project, there have been several private initiatives to roll these devices out across limited geographies. Given these developments and the Government’s involvement, a logical opportunity that arises is content for such devices when they are eventually rolled out, and there is some hope that this large opportunity will spur the development of Unicode-compliant Indic fonts and publishing work-flows.

    A further opportunity lies in the mobile phone – eBook reader hybrid device. While traditional eBook reader designs emphasize form and function, the steady march of falling mobile phone prices, increasing screen sizes and enhanced hardware, along with the impending roll-out of 3G services across the country, affords an opportunity for the nimble publisher to offer eBooks and similar digital content to be read and consumed on these hybrid devices. While these devices may not resemble the eBook readers we are currently accustomed to, the peculiarities of the Indian market often force unique and innovative, or jugaad, solutions. Indeed, it is evident that some players have already spotted this opportunity – Wattpad, a Toronto-based mobile phone eBook developer, has announced that it has reached an agreement to release its mobile phone eBook application on Bharti Airtel, India’s largest mobile phone network.

    A small but growing trend is that of self-publishing and print-on-demand publishing in India. Among the earliest entrants in this space was Pothi.com, a Bangalore-based start-up which started operations in 2008; the company’s aim is to democratise publishing, making it accessible to a wider audience using a demand-backed publishing platform. It also offers ancillary publishing tools, services and an online store to help sell books in both printed and digital formats. CinnamonTeal is another early player in the space. Perhaps having recognized the growing interest in self-expression via the self-publishing route, these start-ups have been joined by Depot, the books and stationery retail arm of one of India’s largest retail groups, the Future Group. It is interesting to note that Wattpad has also announced that it plans to work with one of the larger, more established international players in the self-publishing and print-on-demand space, Lulu.com.

    Indeed, a number of local publishers are already available on mobile phones, and across operators the content has a strong focus on entertainment rather than education. At the simplest level, Reliance Communications, a mobile phone operator, launched a novel by an award-winning author packaged as ninety short text messages delivered over the course of a month; the company plans to convert this into an audio book as well, accessible via a dial-in number. At the next level of technological sophistication are options that leverage existing phones with data connections to provide an enhanced reading experience. Tata Docomo provides comics across a wide range, spanning mythological tales to content from Disney, via a reader that offers an experience comparable to Amazon's Kindle; each comic book is priced at INR 20 and is valid for one year. A company called Nazara Technologies is bringing Archie comics in Indic languages to two mobile phone operators and is in talks with others. Amar Chitra Katha, one of the oldest comic book series in India still being published, is also available via Vodafone's network, with value-added content based on these comics and stories, such as the ability to listen to folk stories and to download ring tones, wallpapers and even games. Then there are publishers who have used technological platforms that are not yet widespread in India, but with an international market in mind. Tulika Books, for example, a children's book publisher, offers multilingual, interactive eBook applications for the iPhone and iPad, while Amar Chitra Katha titles, available domestically via Vodafone, are also on the iTunes App Store.

    With rising levels of disposable income and a growing number of people online, online retail of books and allied printed works has grown quite dramatically in India. The absence of global heavyweights such as Amazon and Barnes & Noble, while arguably evidence of a still-nascent market, has given rise to numerous domestic players in this space: from traditional brick-and-mortar stores that have expanded into eCommerce, such as Landmark On the Net and the Oxford Bookstore, to online-only players such as Indiaplaza, A1Books and FlipKart. FlipKart, in particular, has built a strong and loyal customer base that rivals those of traditional brick-and-mortar stores.

    While several Indian publishers have embraced technology to expand their channels of distribution, and this trend will continue given the falling costs of devices and wireless connectivity, some rather exciting new business models are also emerging in the Indian market.

    ACK Media, which holds the rights to the Amar Chitra Katha comics discussed earlier, is at the forefront of these evolving business models for publishers. The company seeks to provide "high-quality entertainment and educational experiences for children of all ages" and is now present across multiple platforms. As already seen, it is present on a range of mobile phone platforms and has further diversified into home videos, magazines, audio books, TV and film, and even gaming, with the launch of Legend of Katha, a massively multi-player online role-playing game. ACK Media has been able to reinvent and re-purpose content forms and formats to expand its traditional offering to both existing and new consumers.

    Panther Publishers have traditionally published print media for the health, pharmaceutical and medical education markets. They have evolved an interesting, if niche, model that supplements their printed works with multimedia discs offering interactive content about diseases and other medical conditions, while maintaining high levels of quality and accuracy. This model does not supplant the traditional printed media they publish; rather, it enhances their catalogue with content and methods of exploration that print does not allow for.

    Pratham Books, a non-profit children's book publisher with a mission of "A Book in Every Child's Hand", has evolved a social publishing model that uses strong social media elements to build a community around its cause and employs Creative Commons licenses (sometimes referred to as participatory licenses) to allow this community to reuse and remix its content. This community-led model has allowed Pratham Books to side-step existing constraints, to scale the co-creation of high-quality, low-cost children's content, and to create new channels of distribution for a multi-lingual and multi-cultural market. While strengthening the organisation's fundamental mission, the model has also catalysed the creation of multiple derivative works: iPad and iPhone applications, content for OLPC laptops, entirely new books built from existing illustrations, community translations, and versions of its books for the print-impaired, from DAISY and Braille books to rich audio books.

    However, an important element of this gradual shift from the traditional paper-and-print model to digitally enabled models is the legal protection that the Copyright regime offers in India. An amendment to the Indian Copyright Act, currently pending approval, introduces a few key elements relevant to digital content. Amongst the many changes it contemplates, three are worth highlighting; the first two are the notions of "Rights Management Information" and "Technological Protection Measures", or Digital Rights Management, which it seeks to introduce into the Indian legal context.

    Aside from these two, an important exception being contemplated is for the benefit of "persons suffering from a visual, aural or other disability that prevents their enjoyment of such works in their normal format." This matters because India alone has close to 80 million persons with print impairment, that is, those who cannot 'read' content printed in books or visually represented on a screen, and who therefore need content in accessible formats, usually audio, or text that can be read out by software. Unfortunately, there is far too little 'accessible' content in such formats, and efforts to expand the universe of such content are few and far between; Government, non-profit and non-governmental initiatives to create or aggregate accessible content attract attention disproportionate to their impact. This clause, while still debated as to its method and scope of operation, is seen as capable of spurring and supporting large-scale efforts to create accessible catalogues of content. While the print-impaired are under-served, they also represent a unique marketplace, and it is in this area that organisations such as Inclusive Planet operate, offering community, library, content-sharing and accessibility solutions specific to this demographic.

    Extrapolating from current trends, the pace of technological improvement, falling prices of content-consumption devices, and increasing access to low-cost, reliable, high-speed wireless networks offer conditions favourable to publishers: they allow channels of distribution and consumption to expand beyond what print has traditionally offered, and they lay the foundation for new business models that use technology to deliver increased value and scale. It is also worth noting that the infrastructure to support many of these processes, including varied publishing services, content conversion and allied services, already exists in India, even if it has not been targeted at the local market. Local companies, such as FlipLog, are already leveraging local content, talent and partners to serve both domestic and international markets across multiple technology platforms.

    It is easy to see the tremendous opportunities that the Indian market offers to publishers across the spectrum. And it is important to remember that while India might not yet be a technologically advanced market in the way that Western markets are, there remains immense opportunity for technological innovation in content creation, delivery and consumption, as long as the technology is appropriate for existing conditions and offers value in a price-sensitive market.