Community Trust and Crowdfunding

When I saw that the majority of this week’s readings were themed around folksonomies, I was excited! I’m doing a presentation on folksonomy for my Digital Libraries course. I started to write my post, and then I realized that I needed to challenge myself by writing about something that I wasn’t already doing for another course… so I decided to tackle the article about crowdfunding. Everyone knows a little something about crowdfunding – remember that weird potato salad Kickstarter that went viral? – but I didn’t know much about it in the context of professional libraries. Let’s dig in…

Throw money at an idea until it works! Seems simple enough. (Retrieved from http://www.edisonawards.com/news/crowdfunding)

In her article “Crowdfunding: What’s In It For Us Librarians?,” Hope Leman writes about how libraries can use crowdfunding as a tool to involve their communities and build a good reputation. She describes projects she’s backed and how the site works in general, then delves into library projects funded on Kickstarter, outlining their themes and how successful they were. Using examples of library projects she’s funded, she suggests how their pitches could be rewritten to better capture the attention of potential donors. To conclude her paper, she writes:

The age of crowdfunding is upon us, and opportunities abound for savvy librarians and information scientists. There are risks—if you fail, you fail publicly, and your Kickstarter page stays up whether you make your goal or not. Moreover, it takes time to develop and market a crowdfunding campaign. Nevertheless, a well-conducted, successful campaign could generate goodwill in your community and convey the image that your library is a with-it, happening place.

Leman urges libraries to consider crowdfunding as an option for future projects. So, let’s consider it, shall we?

Argument: Libraries should use crowdfunding. 

Support: Successfully funded library projects that built community interest.

In her paper, Leman discusses a few library projects that caught her attention as a Kickstarter backer. LibraryBox 2.0 was her main example of a successful Kickstarter. Jason Griffey had an initial goal of $3,000 and was overfunded to a total of $33,139. Two years later, LibraryBox sells for $150. All in all, it was a very successful Kickstarter that was met with praise. Best of all, Jason Griffey is a librarian. This is doable for library professionals, and he proved it.

Libraries often face budget cuts. Every library professional knows this. The Huffington Post even has a tag dedicated to it. For many public libraries, it’s all they can do to stay open and keep books in circulation. If they have an idea for a project that could keep the community involved – like the Literary Lots project to engage children in Cleveland – they may not have enough money in their budget to pursue it. Turning to the community, then, is a great idea.

Community outreach for funding is perfect if your project serves the community. You can’t create a Kickstarter for, say, a new coffee machine in your office. But if you have an idea for something that could draw more community interest to your organization, then crowdfunding would be an effective way to fund it. You would build trust in your community by following up on your promises as well as by showing initiative to create new projects.

Counter-Argument: Libraries should use more private means of funding. 

Support: A multitude of failed projects that have caused backlash. 

Despite the success of the LibraryBox 2.0 Kickstarter, Jason Griffey still faced a multitude of issues. As his project grew closer to its goal, funders began to pull their pledges, assuming that he no longer needed their money. Leman even quotes a tweet of his where he admits that the real “secret” to pulling off a successful Kickstarter is to have a multitude of connections. According to his site, the success of LibraryBox 2.0 is thanks to both Kickstarter and a grant from the Knight Foundation.

If you’re considering launching a public crowdfunding campaign, you need to already have a good social media presence. Unfortunately, not many libraries have adjusted to social media in a way that goes beyond an informative tweet once a week. Though this is better than no presence at all, it’s not often engaging for the community. If you want your community to donate money to your cause, you need to know that you can count on people rallying behind your organization.

Let’s say you do think you have a good potential donor pool. That’s great! But you need to really understand how much money your project will need. This is true for any money request, especially professional grants (you’ll probably get turned down if you haven’t outlined your expenses clearly). Grant foundations understand that estimates can often be off. Kickstarter backers, however, might not be so forgiving.

Consider that a Kickstarter does not just require raising the money. You need the money for the project, plus money for the incentives that you will ship to donors. I have seen several successfully funded Kickstarters that then failed spectacularly. The people who donated to those projects began to lose faith in the company, and took to Twitter and Facebook to openly complain about not receiving their promised rewards. Examples of Kickstarters that failed in this way include the Coolest Cooler, Yogscast, the Leelah Project, the Geode from iCache, and many others.

Almost all crowdfunding sites require that you provide some level of rewards. Not only do you need an itemized list of how the money you raise will shape your project, you also need to set aside money for rewards. These rewards should be inventive. For example, if you were raising money to create a local film archive, you might offer someone who donated $100 the chance to have their films digitized, while someone who donated $50 could attend a donors’ dinner. Rewards can add up quickly – the Mystery Science Theater 3000 Kickstarter had 27% of its goal dedicated to the costs of shipping and creating rewards.
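To make the budgeting point concrete, here’s a back-of-the-envelope sketch of the arithmetic. The function and every figure in it are my own illustrative assumptions (platform and payment-processing fees vary by site), but it shows why the public goal has to be grossed up well past the project’s actual cost:

```python
# Back-of-the-envelope sketch of a crowdfunding budget. All figures are
# invented for illustration; real platform and payment fees vary by site.

def required_goal(project_cost, reward_cost, shipping_cost, backers,
                  platform_fee=0.05, payment_fee=0.03):
    """Minimum goal that still leaves project_cost after rewards,
    shipping, and fees (taken off the top of the gross) are paid."""
    fulfillment = backers * (reward_cost + shipping_cost)
    return (project_cost + fulfillment) / (1 - platform_fee - payment_fee)

# A $10,000 project with 200 backers at ~$15 of rewards + shipping each:
goal = required_goal(10_000, reward_cost=10, shipping_cost=5, backers=200)
print(f"Ask for at least ${goal:,.0f}, not $10,000.")  # roughly $14,130
```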

If you don’t have all the estimates down, then you might end up with a funded project that only barely covers the cost of your actual goals – or doesn’t cover it at all. You’ll need to send out the rewards as soon as possible, and only then see what money you have left. If you don’t, backers will get angry and will label you a scam. They are not very forgiving. Using a more private means of getting funded – like a grant – makes it far less likely that people will publicly label your organization a failure or incompetent. This is very important. You need to consider how your organization would handle a failed Kickstarter, and how you would deal with the backlash.

Discussion

Any project proposal that requires IT can get messy. In class last week, we discussed the failure rate of government IT projects. Only 5% of large federal IT projects in the last decade fully succeeded; 41% were complete failures that were canceled before they were even turned on. This is the government we’re talking about. It has much more money for its projects than most libraries will get for their crowdfunded projects. However, the government only has itself to answer to. If you use crowdfunding, you might have a large community turn its back on you.

Crowdfunding, if successful, can strengthen your ties to your community. It’s worth considering if you believe that your project is something that people are interested in and would be willing to support. However, if you’re not absolutely sure about how to complete a project goal, then it might be better to turn to other sources.


Is Web 3.0 Already Here?: A Question of Semantics

In the last article we read for class today, Matt Crosslin discussed the “future” of the Internet — Web 3.0, or the Semantic Web. This quote caught my attention:

Some software development companies are creating SL clients that have integrated web browsers inside of them. Clicking on an object in SL with an embedded web link will bring up a transparent web browser next to the user’s SL avatar (3-D virtual representation of a person). Depending on the future direction of the Internet, this could be what surfing the Web could become. Surfers will have a virtual avatar that explores 3-D islands and cities online, and then displays more information about various organizations, companies, schools, and individuals by bringing up a website within the virtual world.

Bender, from Futurama’s episode about the Internet, says, “Behold: the Internet!” (In the next shot, Fry says, “My God, it’s full of ads!”) Image retrieved from fynewnewyork.com.

In the Futurama episode “A Bicyclops Built for Two,” the characters enter the Internet via a virtual reality machine. Their minds are uploaded to the Internet and realistic avatars are created for them so they can physically browse the Internet as if it were a large, sprawling city. They even play games online in a way that mimics physical activity, like laser tag.

While Futurama’s image of the Internet is over the top and entirely unreasonable, Crosslin’s idea that the Semantic Web will allow us to represent ourselves online and search in new and unprecedented ways is much more believable, and it outlines some sensible goals. His article was written in 2011, five years ago now. A lot can happen to the Web in five years. Let’s see if this idea for the future of the Internet has come to pass . . .

Argument: The Semantic Web is a current reality. 

Supporting Evidence: Interconnectedness of current technology. 

When most people think of the future of technology, they think of a truly interactive experience. Like in the Futurama episode, most imagine an Internet that comes as naturally as anything else in the physical world, easily integrated into their current lives. It enhances our physical interactions and makes online interactions more realistic and informed. In many ways, this additive technology is already here.

Consider IFTTT (If This, Then That), a web-based service that allows users to create “recipes” that make their applications execute certain commands under specified conditions. In a Mic.com article on Web 3.0, Greg Muender provides several examples of these conditional statements, including a command to start the hot tub at home if several meetings are scheduled for one day.

Recipes through IFTTT rely on several things: one, that web-based applications can follow intricate conditional commands; two, that web-based applications can communicate with each other (via Bluetooth and wireless Internet sharing capabilities); and three, that users can write their own recipes.

One of the goals for the Semantic Web as outlined by Crosslin is the ability for users to contribute Application Programming Interfaces, or APIs. Essentially, users should be able to personalize their web-based technology for their own needs. IFTTT does not require a deep understanding of code, but still allows users to write their own commands, which fulfills this idea of an executable web experience. Not only do users read and write as established by Web 1.0 and 2.0, they can also interact directly with their web-based devices in a meaningful way.
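For a concrete sense of what a “recipe” is, here’s a minimal sketch of the if-this-then-that structure in Python. This is not IFTTT’s actual API – the Recipe class, the hot-tub action, and the meeting threshold are all invented for illustration:

```python
# Minimal sketch of an "if this, then that" recipe -- not IFTTT's real
# API, just the conditional structure users compose.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Recipe:
    trigger: Callable[[dict], bool]  # "this": a condition over some state
    action: Callable[[dict], None]   # "that": what to do when it fires

    def run(self, state: dict) -> None:
        if self.trigger(state):
            self.action(state)

# Muender-style example: start the hot tub if the day has several meetings.
# start_hot_tub stands in for a call to some hypothetical smart-home service.
def start_hot_tub(state: dict) -> None:
    print("Starting the hot tub...")

busy_day = Recipe(
    trigger=lambda state: state.get("meetings_today", 0) >= 3,
    action=start_hot_tub,
)

busy_day.run({"meetings_today": 4})  # prints "Starting the hot tub..."
```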

Predictive information is another huge facet of Semantic Web theory. In order to better integrate with our daily lives, the Web should know what you want before you know you want it. In some ways, directed advertising fulfills this condition, but there are other, more interesting ways the current Internet is individually tailored: predictive suggestions. Google Now is described as an “intelligent personal assistant” that answers questions, makes recommendations, and performs actions without the user needing to ask beforehand. Google itself already makes predictive suggestions based on your area. If you search for Mexican restaurants, for example, it will show you restaurants near you.

I have a feeling this predictiveness is in other technology, too. I haven’t found a way to prove it quite yet, but every time I pull down on my iPhone’s home screen to use the search function at the gym, the online radio app I use is the first suggestion, as if my phone remembers that I commonly use this app at that location. It makes for a smoothly integrated gym experience – I don’t have to type in “Slacker Radio”. A truly predictive system would have already started the app as I walked indoors, but that’s a task for the future.
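Here’s a toy sketch of the kind of frequency-based suggestion I suspect is at work – certainly not Apple’s actual implementation, just the simplest mechanism that would produce the behavior I described:

```python
# Toy sketch of location-aware app suggestions: suggest whichever app the
# user launches most often at their current location. A guess at the
# general idea, not Apple's actual algorithm.

from collections import Counter, defaultdict
from typing import DefaultDict, Optional

launch_history: DefaultDict[str, Counter] = defaultdict(Counter)

def record_launch(location: str, app: str) -> None:
    """Log one app launch observed at a given location."""
    launch_history[location][app] += 1

def suggest(location: str) -> Optional[str]:
    """Return the most frequently launched app here, if any."""
    apps = launch_history[location]
    return apps.most_common(1)[0][0] if apps else None

for _ in range(5):
    record_launch("gym", "Slacker Radio")
record_launch("gym", "Messages")

print(suggest("gym"))  # -> Slacker Radio
```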

An even simpler technology that augments your reality is Push for Pizza, an app that takes away the decision making process of ordering a pizza and narrows it down to one simple question of whether the user wants cheese or pepperoni. Streamlining processes is an important aspect of the Semantic Web. Users should feel like the Internet is working for them, not that they have to make it do what they want.

Evidence against my argument: We Won’t Know Web 3.0 Until It’s Here 

While Crosslin acknowledges that the lines between Web 1.0 and 2.0 are blurred, many argue that the jump to Web 3.0 will be so drastic that we won’t be able to ignore it. The problem with searching for arguments against Web 3.0 is that the Internet has been accessible to a large number of users for well over 20 years, which means that much older articles are still readable, such as this 2007 article against Web 3.0, which calls the very idea of the Semantic Web “destructive,” and this 2009 article that says Web 3.0 is “nonsense.”

It is easy to see why articles from the last decade would be distrustful of the promises of the Semantic Web. After all, we were promised flying cars in science fiction, and here we are in 2016, still tied to the ground. (I remember reading a Discovery magazine as an elementary student that said all households would have a robot by 2015. I suppose a Roomba counts.)

The criticism surrounding Web 3.0 often derides the idea for its hype. Focusing on what the future might hold detracts from current technological advancements, says Tim O’Reilly; these interactive technologies are just Web 2.0 coming into its own. Hunting for evidence of a fantastical future doesn’t encourage people to look for new ways to innovate outside of these proposed ideas – we need more from technology than ways to make sure the kettle is boiling by the time we get home.

Discussion

I feel like there are parts of Web 2.0 that are bleeding into the ideas of the Semantic Web. Moving away from the term “Web 3.0” is more agreeable to most technologically inclined researchers. If we stop treating Web 3.0 as a clearly defined event and instead see it as a process of technological advancement known as the Semantic Web, I feel that more people would be willing to believe that we are well into the “future” of the Internet.

The scramble to achieve the Semantic Web also ignores the larger implications of this technology. If the Semantic Web expects technologies to “communicate” with each other and interpret data, then there is the potential for technology to decide not to work. Without human intervention, the Semantic Web could spin out of control, and we would lose our hold on the situation.

Technology that augments our reality without being predictive is well on its way to existing. Google Glass is a major example of this futuristic technology, making the interface more easily integrated into our daily lives. Other technologies that could lead to this idea of the Semantic Web include brain-computer interfaces (BCI). Though BCI is not used by most able-bodied people, it is making huge strides in improving computer interaction for paralyzed users. BrainGate allows users to move prosthetics or a computer mouse using only brain waves. We could potentially harness that technology to augment reality for all users, making it more easily navigable.

Eye-tracking is also something that could potentially be used for more streamlined link-clicking and information seeking. This takes us even closer to Vannevar Bush’s idea of the Memex. There is an application known as “Periscope” that allows users to stream what they’re seeing on their phone; many use it for adventures, to show themselves getting a tattoo, or any number of things. If we could combine eye-tracking, link-clicking, and Periscope, we could get even closer to the Memex.

But that’s the problem with futuristic ideas like the Memex or the Semantic Web. They are meant to inspire us as developers, inventors, and information scientists, yet too many people get bogged down in the details of how to make the technology these theories say we should have. Imagine how impressed Bush would be by our Google system, even though it’s not the Memex he put forth! We can continue to advance our technology. We don’t need to worry about whether it fits a theory.

Genderization of Wearable Technology

Fitbit is supposed to encourage us to become more active. How does activity-tracking technology affect our daily lives? (Image retrieved from http://www.npccomic.com/comic/2014/01/10/if-a-tree-falls)

I asked my father for a Fitbit Surge for Christmas. I wanted something that would not only track my activity so that I could better log it on MyFitnessPal.com, but also more accurately estimate calories burned via heart rate monitoring. Steps are not my main goal – in fact, I set my daily step goal at a paltry 5,000. I sometimes hit 10k or 15k, but I try to be reasonable. Instead, I focus on the 30 minutes of activity I get at the gym daily, using my Fitbit to keep track of how much I burn. Did it work? Well, since January 1st, I have lost 17 pounds.

Was the Fitbit necessary for that loss? No, of course not. I could surely go to the gym without some overpriced piece of plastic technology on my arm. But it’s more than a tracker. It’s a constant reminder. Instead of thinking, “Hm, I should probably go to the gym sometime,” you remember every time you glance down at your wrist to check the time.

For today’s blog post, let’s discuss gendered technology.

Argument: Wearable technology is marketed in accordance with gender roles. 

Evidence to Support My Argument: The marketing strategies of Fitbit and Google Glass. 

Let’s talk about gendered wearable technology for a moment. When you consider the potential user of a Fitbit, what pops into your mind? I won’t speak for you, but I usually imagine a thin, white woman in her late 20s or early 30s. In fact, the vast majority of my friends list on Fitbit – connected via Facebook – are women. This is no mistake; the diet and exercise/weight loss industry (and yes, it is an industry, with businesses that raked in almost $6 billion in 2015) is specifically geared toward women.

Presuming you watch TV, consider the last diet commercial you saw. Perhaps it was something like “My Husband Ted,” wherein a wife complains that her husband stopped drinking soda and lost weight, while she didn’t. Consider any commercial that flashes images of its weight-loss champions. I would say that most of them are women, who deliver taglines like, “I lost three dress sizes!” or “I am the size I was in high school!” or even, “I feel sexy and desirable!” Perhaps even, “Please validate me, now that I have restricted myself for months so that I can be the size that every woman should be according to our patriarchal society! I hope you find me sexy now! PLEASE TELL ME I’M SEXY!”

Oh, excuse that last one. (Was that a little heavy handed? Perhaps…)

Let’s take a look at the Fitbit site. Perusing their images of customers provides you with many, many images of “fit” and “active” women, with a few men sprinkled in. By my count, there are 7 images of women and 5 images of men. One of those images is a couple, and three of them don’t focus on a face (one is just a hand that I assume is masculine, and another is a hairy pair of legs).

If you had the time to click through the gallery, perhaps you saw my favorite of these screenshots: Tory Burch for Fitbit. “Make fitness fashionable,” it urges you. The images with men mostly focus on being active, on setting goals (see: the Fitbit Surge image). But for women, fitness becomes an idea, an aesthetic. Athletic chic. Beautiful activewear and a Fitbit that fits in with your other accessories. The Fitbit becomes less a reminder to be active and track your progress, and more a fashion statement.

Let’s compare this with another famous wearable technology: Google Glass. Try to imagine someone wearing Google Glass. Are you having trouble coming up with something? Don’t worry, I can’t remember the last time I saw an actual breathing human being wearing one.

Oh wait, yes I can. Yesterday I got lunch at Noodles to Go by campus (the Thai Hot Pot is very good and I recommend it). There was a man there on a blind date. Three guesses as to his accessory of choice – and the first two don’t count.

Yes, he was wearing Google Glass on his first date. An odd choice, to be sure. My husband and I shared amused looks, texting each other about whether or not we would date someone who wore Google Glass on a date.

Now, I did some thinking after I read the article on wearable technology last night, and realized I had never seen a woman wearing Google Glass. So I did a little digging, and found this article, “Google Glass’ Women Problems:”

Recently Google co-founder Sergey Brin took to the stage at the TED “Ideas Worth Spreading” conference. His idea worth spreading? That cellphones are “emasculating.” […] With one word Brin appears to have shot in the direction of both feet: both possibly alienating Google’s male Android smartphone customers and offputting women who might otherwise be in the market for Google Glass.

Taylor Burley investigates Google’s gendered marketing problem further. In an analysis of the gender makeup of Google+ users who used the #ifihadaGoogleGlass hashtag, he finds that an alarming 86% of these users were most likely male and 14% female. The same hashtag on Twitter had a similar split of 80% male, 20% female.

Perhaps the marketing itself isn’t to blame. Google might have seen the gender makeup of people responding to Google Glass and realized that it should appeal to the majority of its market – men – by playing up the masculinity angle. Should Google be blamed for this? Perhaps not. But if Google Glass is marketing itself as the technology of the future, then it shouldn’t focus solely on male potential buyers.

(Side note – before we get into evidence against my argument, I’m just going to show you the first result on Google when I typed in ‘women’s Google Glass’. It’s a video detailing a woman’s day by recording it via Glass. It doesn’t end well.)

Evidence Against my Argument: Marketing strategies of music-making wearable technology.

In one of the articles we read for class, the author mentions a Kickstarter for an amazing musical invention called the “Machina MIDI.” It’s a jacket that a musician can wear to perform music by moving around, with preset sounds for each movement. It’s a very cool concept, one that actually had me gasping in surprise at how well the opening video advertised the product.

I decided to look into more music-making wearable technology once I noticed that the jacket didn’t seem to be gendered. I found a post that lists many musical technologies, and perused their products. Each of these products is definitely worth checking out.

One thing I noticed is that there seemed to be a pretty good balance between the representation of male and female users. Imogen Heap, an amazing artist I’ve been a fan of for many years, markets her musical gloves, the MiMu. The marketing doesn’t focus on the gender of the user; it focuses on what this technology creates: sound, music.

Even without gendering wearable technology, many wonder if wearable technology is too specific, too “out there”. (Image retrieved from davewalker.cc)

Discussion

Perhaps the gendered marketing of wearable technology depends on its purpose. If the technology is meant to supplement your daily life – such as the Fitbit or Google Glass – then its marketing appeals to those who would already use it. For instance, marketing already implores women to take daily walks and strive to be active, so a Fitbit helps keep them on track toward their goals. Google Glass, however, is more focused on the technology aspect, and technology innovations in general tend to be marketed toward men (as the popular idea is that men dominate the IT industry).

Wearable technology that is meant for all-day use is gendered just as our fashion is gendered. Fitbit makes special bands that look like bracelets because the company understands that women probably won’t wear something all day if it’s ugly or sticks out and looks tacky next to the rest of their outfit.

However, musical wearable technology has a purpose. It is not meant for all-day use. It is focused on a product, a creation – whether it be songs or a light show. Why gender something that already has a specific market – musicians? It doesn’t make sense for that particular product.

Despite the fact that many wearable technologies present themselves as the products of the future, all of these technologies are part of a business, and these businesses are driven by profit margins. As terrible as it may be, I doubt that the gendered marketing of wearable technology meant for daily use will dissipate. If it’s what’s making money for these businesses, they’ll keep doing it until it no longer makes money. (Or, perhaps, they’ll take a bold shot by going against gender norms and making that their market by embracing LGBT marketing strategies – but “pink money” is an entirely different issue.)

Perhaps all this comes down to the idea that many companies might not understand the usefulness of their product. With wearable technology being put forth as the technology of tomorrow, more companies feel pressured to release it. Consider the Apple Watch, or any other smartwatch. Why would we need a smartwatch, when it doesn’t take that much effort to take your phone out of your pocket? Because it looks cool, mostly (which brings you back to aesthetics, which puts you right back at gendered marketing). In an interesting article, Mark Thomas implores businesses to critically consider what their technology actually brings to the table:

So, we’ve established that there are numerous devices with endless capabilities, but how does any of this truly have a positive impact on our lives? Carefully managed, these devices and the data they collect can give users greater power to detect and act upon many different aspects of their lives, from banking and finance through to home security and automation, and, in the health and wellbeing arena, from early health warnings through to customised fitness programmes tailored to the individual. I’m a keen follower of physics and chemistry, and can see that wearable tech has huge potential in detecting early signs of low blood-sugar levels, or reminding people to take medications based on signals from physiological sensors, as examples.

If businesses were to focus on the real benefits of wearable technology and how it might help the average person instead of how wearable technology plays into gendered fashion expectations, then we might get rid of this gendered marketing issue altogether. That, however, would require businesses to think more critically about what they really want their technology to do, rather than just look cool or sit pretty on your wrist.

Information Accessibility & Upward Mobility

The idea that people can “climb the ladder to success” often ignores that the ones on top make it difficult to follow them. (Image from: http://www.economist.com/news/leaders/21571417-how-prevent-virtuous-meritocracy-entrenching-itself-top-repairing-rungs)

Technology can save the world! Can’t it?

The idea that technology can drastically change economic circumstances around the world is the running theme throughout the required readings for today. Many of these readings reek of White Savior complexes, especially the start-ups mentioned in Charles Kenny’s article “Can Silicon Valley Save the World?” Kenny writes of several start-ups meant to ‘fix’ impoverished countries, all of which began with the idea that a simple invention can solve the problems of entire nations. He speaks of a claim made by broadband companies that access is linked to an increase in GDP, which outright ignores the fact that China has many impoverished citizens while also having some of the most ubiquitous broadband access in the world. Another great point he makes is that access to the Internet means nothing in countries like Liberia, where the literacy rate is very low.

Some of the most egregious examples of failed Western inventions meant to fix poverty abroad include a light-generating soccer ball called the “Soccket,” which costs 10 times more than an effective solar-powered lamp and requires the ball to be played with before the light will work (because, you know, all African children love soccer and need to play more than they need working lights). PlayPumps, a water pump backed by influential supporters like AOL and Laura Bush, cost four times what a regular working water pump does. It was also prone to breaking and required 27 hours of “play time” to meet the water needs of a community. Because, again, African children love to play – more than they love access to water.

I’m digressing here. I suppose this desperate need that (mostly white) Westerners feel to fix the rest of the world doesn’t sit well with me, especially since we so often ignore that many of the structural issues within these countries come from our direct involvement in them via colonialism. (See: “How Europe Underdeveloped Africa” by Walter Rodney. I’ll even give you a link to a pdf! Click.)

Of course, my initial cynicism and distrust don’t say much about how technology has genuinely helped improve society around the world. In his article, Kenny lists many successful companies that have helped countries via accessible medicine.

The very idea of social informatics is that society affects technology and vice versa. Consider the political cartoon at the beginning of this article. American culture pushes this idea that disenfranchised citizens can become successful, when truthfully the wealthiest only seem to get wealthier as the poor become poorer. This economic divide, and its continuing widening, is reflected within information science. The digital divide, as defined in the Kerry Dobransky article as “a gap […] within and between societies in the degree to which different groups have access to and use information and communications technologies,” widens as technology helps facilitate communication.

Consider the situation I referenced with Liberia’s low literacy rate. Kenny describes a push to give laptops to African children, despite evidence that laptops do not significantly further education. What good is a laptop if children can’t read? What good is access to this technology – when gifted – if literacy isn’t accessible as well? Technology, though it has given us unprecedented access to information, causes this gap to widen. Poor, uneducated people have neither access to the technology nor the ability to use it, while the wealthy and educated gain more and more knowledge and benefit from these technologies at skyrocketing rates.

Of course, there is another side to this argument, as there always is. Sure, blindly giving technology does not solve poverty, just as putting a band-aid on a broken dam would not stop a flood. However, reasonable expectations can be met by making information technology more accessible to disenfranchised persons. Consider public libraries – they strive to meet the needs of even the poorest members of the community. My father, who grew up in poor, rural Mississippi, told me that he would spend hours at the library reading books on any subject he could get his hands on – and today he’s a successful orthopedic surgeon.

Yes, rich people benefit from information technology at much higher rates than poor people. However, this does not mean that this technology is at all wasted on disenfranchised members of the community. It can still help them. It is when people have extremely high expectations of technology that it does less to help and more to hurt a community. If you make the exorbitant claim that an iPhone app is going to solve world hunger, then of course your app will fail. Technology can help. We just need to be realistic about it.

Evolve or Die: Necessity of Innovation

I’ll admit that the idea of innovation is not something that interests me. I have never been suited for business, and I am not the type of person who goes about trying to change things. I am stubborn, set in my old ways. Of course, I will try out new and improved ways of doing things. But I am not creative in that way; I will not be the one doing the innovating.

The concept of innovation, therefore, is somewhat daunting. I think of imposing businessmen at long tables with a stammering, hopeful innovator standing at one end, about to pitch a new idea. It’s never appealed to me, which is precisely why I’ve never pursued any sort of business venture: it’s simply not for me.

However, all of this week’s readings revolve around innovation. For a while, I tried to see if there was a subject I could cherry-pick from these articles in the hopes that I wouldn’t have to pursue the idea of innovation as a blog post. Alas, this was not the case, and I am stuck here with a little imp called innovation jeering at me while I stare blankly at these pages.

My cautious approach to innovation is the idea of evolving in order to adapt to an increasingly technological society. After much consideration, I have smashed “innovation” and “library science” into one argument: a library must evolve to survive, or it will surely perish.

A political cartoon about libraries from 1996. That’s 20 (!) years ago. (Retrieved from: http://edsteinink.com/2009/05/08/oldie-but-goodie/)

The modern teenager would have you believe that libraries are not useful anymore. After all, with a simple Google search, you can find all the information for a research paper you need, right? A visit to Wikipedia, and blammo, you’re done!

Wow, it’s so simple. I really wish someone had taught me that in undergrad! And here I was, spending hours vetting articles through JSTOR, Muse, or EBSCO. So silly – everyone knows that no one can lie on the internet.

Let’s take a little side trip here to show you a humorous example of lying on the internet, from this very amusing Tumblr post. According to this post, Tumblr user hullaballoons created a website about 6 years prior with falsified “facts” about President James Buchanan. Though it was not an official site whatsoever, many misguided and eager Googlers typed in “james buchanan facts” and stumbled across her site. It wasn’t just shared on social media; an author of a presidential trivia book took these entirely untrue statements as fact and published them.

Because, of course, nothing on the Internet can be a lie. You have to be qualified to post here. (Just don’t look at a comment section on YouTube anytime soon.)

Of course, anyone with any sense knows that there is a lot of false information floating around on the Internet. If you browse the books in a library on a certain subject, you can be sure that, most of the time, they contain accurate information you could use for a paper or presentation. But as more students procrastinate, fewer want to browse the bookshelves when they believe the Internet can give them the instant answers they need.

So librarians adapt. We give lectures to classes of incoming freshmen on how to search for accurate information. After all, that’s a major part of our job – to search for information for patrons so that their research needs are fulfilled. So, when they’re either too pressed for time or too nervous to approach us, we take the process to them. I often hated sitting through those lectures during early English or History courses where a librarian would show us how to use a database, but I know that if I hadn’t learned those search methods, there are many papers I would never have finished on time.

There are many who agree that library outreach is a necessary innovation. In an article on Public Libraries Online, Carolyn Anthony asserts that libraries must remain relevant by adapting to modern users’ needs:

Accepting the need for constant innovation will require that public libraries adopt a disciplined approach to turning outward toward the community to understand how the library can adapt to people’s changing lifestyles and patterns. It will also require that public libraries hire people who are creative, analytical, and social to engage with community residents, and to form partnerships with agencies staffed with workers having diverse skills who can work with us to help the community achieve its aspirations.

A cornerstone of library science theory is that libraries exist to suit users’ needs. What use is a library that sits pretty but isn’t accessible or worthwhile? Though a library is an important repository for human knowledge, it also serves a greater purpose. When our users’ needs change, so must our methods.

Perhaps the idea that libraries are outdated and backwards is a public relations issue. In an article on Shareable, Cat Johnson insists that libraries are evolving – and have been for quite some time. She cites evidence that librarians have embraced the Internet since it came into widespread use, which makes sense once you stop to think about it. As information science professionals, why wouldn’t librarians learn to use online resources? For some reason, however, librarians have a bad rap for being obsessed with print and stuck in their old ways.

It’s hard to find any literature that argues against libraries, believe it or not. I was sure that I would find some sort of ultra-conservative rant about libraries being a waste of taxpayer money, but I was hard-pressed to find such arguments. Instead, I found several rewarding pieces that posed the question, “Do we need libraries?” and answered it with resounding “YES!”s all around. Until, that is, I found one simple blog post from 21st Century Library:

. . . There is no adequate answer – yet.

Why is that? Why are we – the profession – unable to answer that fundamental question? Is it because there is no single answer that satisfies everyone? Is it because the answer is too big for non-librarians to understand? Is it because it is the wrong question that has no correct answer? Yes. Yes. and Absolutely!

When we look at what the library is evolving into today – the 21st Century Library – we can easily answer the “Why…?” question with a “We don’t.” answer. BECAUSE that tired old question is asking why we need the “classic” library, the 19th Century Library, the “collection of books” library, the librarian as “gate keeper” library. And the correct answer is WE DON’T!

We absolutely DO NOT need that tired old stereotype library with the bunned, shushing librarian guarding a dusty collection of “books.” Society has no use for those obsolete libraries and librarians of the past that were adequate for the society of the past.

I won’t share the entire post, though it’s certainly worth a read. Kimberly Matthews’s post is a call to arms for libraries to adapt to the 21st century – and for users to see that libraries have embraced modern technology. Though the stereotype of an old, angry librarian “shushing” a patron from behind a stack of dusty books may persist, we never need to stoop so low as to fulfill that social prophecy.

Libraries are better than that. Our users DESERVE better than that. And that is why innovation is an essential part of the future of library science.

Introduction to Social Informatics

This is a cartoon showing the social effects of computer usage. Seems about right to me. (Retrieved from: http://www.tomandmaria.com/i202)

Hello and welcome to my social informatics blog! Here, I will create regular blog entries based on assignment requirements throughout the Spring 2016 semester.

For our first post, we are to write about our understanding of social informatics after reading the following articles:

  • Zhang, P., & Benjamin, R. I. (2007). Understanding information related fields: A conceptual framework. Journal of the American Society for Information Science and Technology, 58(13), 1934-1947.
  • Sawyer, S., & Rosenbaum, H. (2000). Social Informatics in the Information Sciences: Current Activities and Emerging Directions. Informing Science, 3(2), 89-95.
  • Fletcher, C. (2006). Appreciating context in social informatics: from the outside in, and the inside out [Workshop handout]. ASIS&T Annual Meeting, Austin, TX.

Before I write about these individual articles, I want to begin with my understanding of social informatics from our first introductory lecture. As far as I understand it, social informatics is the study of the social consequences of information and communication technologies. The field studies how technologies impact our everyday interactions outside of technology, as well as how technology suits our socialization needs. One statement that stuck out to me from that first lecture was the assertion that most of our online interactions stem from our social desire to gossip – which I find hilarious, and which I can assure you is entirely true after years of sorority meetings. But the field is about more than just how we use the Internet to talk or how we text each other. It also studies the information divide – how race, gender, and class can affect accessibility.

Now, after reading these articles, let’s see if my understanding is now more nuanced.

Zhang’s article on information related fields focuses on the Information Model, or I-Model, which “asserts that information with the help of technology can provide capabilities to people and to society in various domains and context.” It works to define common characteristics between different areas of information studies, with four fundamental components: Information, Technology, People, and Organization/Society. Later, he introduces two additional components for consideration: domain and context. “Context” caught my attention, because I remember talking about it during our first lecture, where we defined different types of context: social, cultural, political, economic, technological, institutional, and individual. These contexts are important for how we interpret information. Within the article, context is defined as “specific settings, circumstances, or conditions in which studies are conducted or practices are carried out.” I hadn’t considered “domain” before this article; it is essentially the background information, such as a subject discipline or field of study.

I found this idea of domain and context confusing until Zhang gave an example: “in a study on information technology (IT) use in emergency room operations, IT can be quite low fidelity (a white board). People concerned in the study are doctors and nurses who have knowledge and expertise not only about the application domain (medical) but also about the particular context (the emergency room where they work).” Domain and context are not truly separate ideas; rather, they are both terms that help define the environment in which an information study takes place. These fundamental components are key to social informatics, as they show how information works within a social environment.

This article is quite useful, especially given the multiple references to Human-Computer Interaction, or HCI, which is another course I’m taking this semester. I’m interested to see how often my HCI and Social Informatics courses will overlap.

The second article addresses social informatics more specifically, particularly with regard to the World Wide Web and emerging digital assets such as digital libraries, electronic commerce, and distance education. The beginning of the article focuses on the history of the field, asserting that the social impact of computer usage has been studied for about 25 years. I have to wonder why this field is associated with the information sciences and not psychology or sociology. It seems interesting to me that studies on computer usage – which, in countries like South Korea, China, Japan, and yes, even America, has in some cases led to computer addiction – are associated with this field and not more closely with psychology. Perhaps it is because studies from the psychological or sociological fields tend to focus more on abstract studies of the persons involved and not on the technology involved, which is more closely related to our field. Though social informatics studies the societal impact and consequences of information and communication technologies, there is an emphasis on technology (as in, how certain kinds of technology lead to different social impacts, or how technology facilitates communication).

I felt satisfied when I saw that Sawyer and Rosenbaum addressed this question in their article, stating that “Researchers in fields as varied as computer science, information science, communications, sociology, anthropology, information systems, management science, education, and library science have been investigating the ways in which ICTs and the people who design, manage, and use them shape and influence each other in different social contexts.” It’s a multidisciplinary field that can be approached from several different “domains” (to give a nod back to the Zhang article).

Other interesting points I noticed in this article include a snippet about schools pushing Internet connections in educational settings without considering how the Internet truly facilitates learning. This article was published in 2000, when I was 8 years old. Around fourth or fifth grade – that is, between 2001 and 2002 – the elementary school I attended did away with its program for advanced students, known as Excel. In the years before, advanced students would meet during recess for additional instruction in science and math. We would meet with a teacher who would give us science projects or logic problems. It stimulated our intellect and gave us a way to interact with each other. However, the school decided to do away with this program and replace it with a computer program. I refused to participate in Excel after that. There was no camaraderie or group connection to be made with other gifted students if we spent the whole time staring at a computer screen. We had no instructor for the activities – just someone who observed. I believe that Sawyer and Rosenbaum pointed out a disturbing trend in schools with this article: the push to adopt technologies to stay ahead, while instructors do not keep up with the technology or understand how to still feed students’ desire for interpersonal communication and interaction.

It isn’t just communication that suffers from turning education into an online or computer activity. Budgets suffer as well. In high school, there was a push for “smart boards,” interactive whiteboards with special “markers” you could use to interact with a projected computer screen. Hardly any teacher knew how to use them correctly. Those who did considered the technology too expensive to actually let students use. I remember it being used only once for an actual whiteboard-like activity, when a teacher used the markers to circle mistakes on a sample essay. Most of the time, the expensive, interactive technology was used as a glorified projector, showing movies or PowerPoint presentations.

The most interesting part of this article was at the end, where the authors list statements they consider to be a summary of findings within Organizational and Social Informatics research. One statement I found quite intriguing was the idea that ICTs are not value neutral: there are winners and losers. We discussed this briefly during our introductory class, and I’m interested to learn more about it. I remember a question once posed in a college ethics class about whether inventions or technology could be good or evil. Though many of the other statements on the list include “use” after the term ICT, this one quite clearly says that it is the technology itself that is not value-neutral. I wonder what my ethics class would have said about that.

Finally, we come around to Fletcher’s workshop handout on social informatics. This is the shortest of the required readings, with condensed information. It is a handy guide, and I’m tempted to print it out and keep it around for reference. Again, we come back to this idea of context as it relates to the field of social informatics. While Zhang wrote that he would rather avoid the arguing and in-fighting over the term and simply keep his own definition for his article, Fletcher presents a very brief history of the term, from its roots in anthropology to its use in the social sciences.

Within his handout, he lists four possible interpretations of the relationship between society and technology:

  • ICTs in social context (Technology in Society).
  • ICTs as part of social context (technology partially social).
  • ICTs have technological and social contexts (technology and society).
  • ICTs are primarily social (technology as social).

He says that there is no unifying theory of this relationship. If I had to choose which item on this list most closely fit my understanding of social informatics before I read these articles, I would say the fourth is most agreeable. After all, communication is in the name of the technology, and communication depends on social interaction. You can’t have an ICT without at least two people interacting in some manner. Now, though, I can see merit in all of these proposed ways of understanding how society and technology interact. Perhaps I’d lean a little more toward the third idea, that there are both technological and social contexts for ICTs.

I found these articles helpful to provide a crash-course introduction to the field. I’m excited to get into specifics as the semester rolls on.