Saturday, November 20, 2010

Assignment 6

Here's the link to my submission for assignment 6:

my website.

I'm new to this, so if anyone should happen to come across any errors, please let me know! Thanks.

Wednesday, November 17, 2010

Week 11 Comments

Here are my comments for this week:




Thanks.

Week 11 Reading Notes


Web Search Engines (Parts 1 & 2)

I thought these articles were very interesting. Rarely does a day go by (particularly these days) when I don’t spend some time involved with a web search engine, whether it’s Google or not. With the advent of mobile technologies and the rapid pace of their development and improvement, I can’t imagine that the time we spend using search engines will lessen. Because of that, I think articles like this are invaluable because they explain in some depth the functionality and background of web search engines—they give us context and a vantage point from which to marvel at the technological achievements that surround us.

Current Developments and Future Trends for the OAI Protocol for Metadata Harvesting

There are several concepts that have emerged across classes this semester as key concepts. Among these, and not the least of which, is the idea of interoperability. In a world where individual users must access multiple systems simultaneously, it’s completely necessary for protocols like OAI-PMH to exist in order for such simultaneous access to be possible. These sorts of protocols are testaments to the organizational mastery of humans. In the relatively short period of time since issues like interoperability/interaccessibility were first raised, we’ve come so far towards permanent solutions.

Deep Web: Surfacing Hidden Value

I love when this kind of thing happens—only last night, Dr. Tomer was talking about the vertical nature of the current Pitt ULS website. It’s not exactly the same thing that’s talked about in this article, but it’s pretty close…

When the authors noted that the deep web accounts for more than 99% of all information on the internet (19 TB on the surface, 7,500 TB in the deep web), I had to wonder just what kind of information this is… SEC filings? Medical files? Yes. But there’s so much more, too! I thought the list of the 60 largest deep web sites was really interesting, though it’s a shame that the NOAA link doesn’t work. I was also shocked to see mp3.com on the list.

Week 10 Muddiest Point


I’m not sure how to phrase this, but based on the two key attributes of digital libraries provided by Borgman, to what extent can someone consider MySpace Music to be a digital library?

Wednesday, November 3, 2010

Week 9 Comments:

Here are my comments for this week:

http://bds46.blogspot.com/2010/10/reading-notes-week-9.html

http://jonas4444.blogspot.com/2010/11/reading-notes-for-week-9.html

Thanks.

Week 8 Muddiest Point:

Jiepu mentioned that in several instances, there are multiple ways of writing HTML to represent the same thing. For example: using a cite tag to achieve italicized text and using a font tag to achieve italicized text. Depending on the use, is one preferred over the other? Would the use of a cite tag in order to format a citation be more correct than the use of a font tag to achieve italicization, which would denote a citation from the point of view of a website viewer? Thanks.
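For what it's worth, here's how I currently understand the difference (a toy example I put together myself, so take it with a grain of salt): a cite tag describes what the text is, while a tag like i only describes how it should look.

```html
<!-- Semantic markup: the tag says this text is the title of a cited work.
     Browsers typically render it in italics anyway. -->
<p>My favorite novel is <cite>Moby-Dick</cite>.</p>

<!-- Presentational markup: the tag only says "make this italic." -->
<p>My favorite novel is <i>Moby-Dick</i>.</p>
```

Both look the same to a viewer, but only the first one tells software (a search engine, a screen reader) that the italics mean "citation."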

Monday, November 1, 2010

Week 9 Reading Notes:


Brighton University Resource Kit for Students:

A little anecdote while I wait for the ISO to download: there was a postgraduate pub at the UK university where I did my MSc. It sounds a little snobbish, but a graduate-and-faculty-only pub was a wonderful place to wind down and casually discuss your research. (It never hurts to get second opinions from people working in other areas.) BURKS came up one night while a group of us were discussing various bits of humanistic research, from our involvement in the Iraq Body Count Project to studying the roots of inequality in the Caribbean islands. The conversation generally turned to international inequality and then to the idea that information could be the ‘great democratizer.’ Predictably, the global variance present in Internet access came up, and an electro-acoustic music composition PhD candidate I knew brought up BURKS. I’m glad to see that at least one night at the pub has validated itself! I can’t wait to look through it once it finishes downloading (in two hours…).

Survey of XML standards Part 1:

I was immediately struck by the fact that the second English specification of XML is the one that is intended to standardize extensible markup language. I guess it makes sense—I mean, French was the language of politics before English took over. There is always a preferred language for a certain field. That said, I wonder whether or not the democratizing potential of Internet technologies justifies the use of some kind of auxiliary language, maybe Esperanto? It seems that the use of such a language would automatically put everyone on the same page, as very few countries actually utilize Esperanto as a national language…

 I also enjoyed reading that XML is a simplification of SGML. We’ve learned about XML in a few other classes (though this is one of the better explanations of it that I’ve seen), but no one has previously come right out and said that XML is an attempt to streamline or simplify SGML, the parent of HTML. Coming from the field of psychology, where basic analyses of variance (ANOVAs) and analyses of covariance (ANCOVAs) have given way to more complex statistical methodologies like factor analysis and the exponentially more complex structural equation modeling (SEM), I think it’s nice to see progress come in the form of simplification. That said, the more I read, the less it seems that XML is really a simplification. Maybe that’s just because any set of drastic changes made to a standardized procedure already in place necessitates the clarification of every little change, every new nuance?
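To make the 'extensible' part concrete for myself, here's a minimal well-formed XML document I mocked up (the element names are entirely my own invention, which is exactly the point):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<recordCollection>
  <!-- These tags aren't defined by any standard; XML lets you invent
       whatever vocabulary fits your data, as long as every opening
       tag has a matching closing tag. -->
  <record>
    <artist>Lee Morgan</artist>
    <title>The Sidewinder</title>
    <year>1964</year>
  </record>
</recordCollection>
```

Nothing in HTML lets you declare a tag like &lt;artist&gt;; in XML, that's the whole idea.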

After reading through this, (and admittedly, I didn’t really attempt what you might call a close textual analysis, haha) I’m not entirely sure that I’m clear on the distinction between XML and XHTML. Can anyone shed some light on this?

Extending your Markup:

I found the examples provided in this tutorial to be very helpful. Learning by example is always easier for me (maybe I’m a visual thinker, or just a little thick -- probably a combination of the two). In any case, the inclusion of the examples found in the orange boxes was much appreciated.

XML Schema Tutorial:

Like the HTML tutorial by W3 that we saw last week, this site functioned a little like a cup of IT chamomile tea. I definitely bookmarked it, as I’m sure it will come in handy down the line.

Thursday, October 28, 2010

Assignment 5: Koha

Here's the link to my list:

http://upitt01-staff.kwc.kohalibrary.com/cgi-bin/koha/virtualshelves/shelves.pl?viewshelf=48

My list is called 'Favorite Literature - John Seberger,' and it's just that-- a list of my favorite books (in no particular order). Did anyone else notice that regardless of how you chose to sort your list (author, title, copyright date), the list just shows up in the order in which you created it? Did I miss something?

My username is jss86.

Thanks.

Week 8 Comments:

Here are my comments for this week:

http://feliciaboretzkylis2600.blogspot.com/2010/10/muddiest-point-for-week-8.html

http://christyfic.blogspot.com/2010/10/muddiest-point-for-october-25th-class.html

Thanks.

Week 7 Muddiest Point

I'm pretty new to the tech side of computers. I've certainly used them for most of my life, but even the thought of learning something as simple as HTML is exciting and new to me. Given the speed with which computer-related technology evolves, I'm wondering if HTML is still widely used. If it's been around long enough that I'm finally learning how to use it, does that mean that something else (XML?) has taken over?

Week 8 Reading Notes:



W3Schools Site

I was really excited to see this site. I have no experience actually using HTML beyond editing a myspace page when I was about 15, and so I was just a little apprehensive about Project 6. This site definitely lightens the load, so to speak.

I have to say that I was pretty surprised at how easy HTML is to use. Whenever I think of any computer language (markup or not), I think of a terrifying compilation of what might as well be squiggly lines or Sanskrit. It’s far more logical than I thought it would be, which is evidenced by the symmetry of tags…

Wired webmonkey cheatsheet:

I had a very similar reaction to this site. I’ve always felt that there’s a pretty large ‘insider/outsider’ gap when it comes to computers—that is, you either know how to use them or you don’t. You either know programming/tagging languages or you don’t, and you necessarily need an expert to teach you how to use them. Kind of like playing jazz. Suddenly, I see that’s not the case.

I can see this really coming in handy for Project 6.

Beyond HTML:

This article gives a fascinating example of the need for organization and standards when developing a product. We’ve read a lot about folksonomies and the idea that systems of organization can develop themselves organically through user-generation of data, but it’s clear that this approach doesn’t work in many situations. As was stated in the article, when all library liaisons created their web content according to only their own standards, the overall site was just a mess…

I also wanted to say that HTML is the basis for content guides, but it doesn’t define the boundaries of what they can be. It’s entirely up to the creator and the way the creator sees the community that will be using the library guide.

Wednesday, October 20, 2010

Assignment 4: Personal Bibliographic Management Systems (GoogleScholar, Zotero, CiteULike)

CiteULike library link:

http://www.citeulike.org/user/sebergerj

The three topics I chose are: music information retrieval (MIR); value in design (VID); cloud computing.

Fast Track Muddiest Point

My muddiest point for this week is simple, but I still want to ask it. I understand the area-based distinctions between LANs, MANs and WANs, but I'm not sure I understand the physical differences between these types of networks. Is the only difference the area the network covers, or is there a difference in the physicality of the networks?

Wednesday, October 13, 2010

Week 7 Reading Notes



Article # 1:

First of all, it was great to see a David Pogue video. I used to watch his video blog on NYTimes, but I haven’t had the chance in a while. He’s got such a bizarre way of presenting. That said, I always just assumed it wasn’t safe to shop or bank at wireless hotspots, but I never really realized how easy it would be for someone to run a program like Eavesdrop (is that what he said it was?) and steal information from you. How and why are programs like that legal? Are they meant for closed networks so, say, a parent could monitor a child’s internet usage?

Overall, I’d say this article was the best we’ve read for this class. Short, concise sections coupled with clear definitions…

Dismantling ILS Article:

Can anyone with more experience shed some light on where interoperability in libraries stands in 2010? I'm never sure if an article from 2004 should be considered entirely outdated in this field...

Sergey Brin and Larry Page Video

Wonderful use of information visualization! I was absolutely blown away by the globe graphic… and now I kind of want to work for Google (like everyone else). 

Wednesday, October 6, 2010

Week 6 Comments

Here are my comments for this week:

http://marclis2600.blogspot.com/2010/10/readings.html

http://lostscribe459.blogspot.com/2010/10/week-6-readings-computer-networks.html

Thanks.

Week 5 Muddiest Point

I have no muddiest point this week.

Week 6 Notes


LAN article (Wikipedia)

This was a pleasant enough introduction to local area networks. Anecdotally, it reminded me of the days in high school when a group of my tech-savvy friends would get together for LAN parties. Frankly, I had no idea what they were talking about at the time…

The most interesting aspect of this article for me was the history it provided. As is usually the case for me, I had no idea that this type of technology goes as far back as it does.

Networks article (Wikipedia)

‘Terrestrial microwaves use earth-based transmitter and receiver [sic].’ I love that we’re studying something that necessitates the use of the phrase ‘earth-based.’ I keep waiting to hear the Twilight Zone theme… But seriously, I found the introductory discussion of the various types of wireless technologies to be pretty helpful.

Networks (Youtube video)

Very simply put. Not much to say about it, really. I thought it was summed up a little bit better in the Wikipedia article on networks, but I certainly appreciated the brevity of it.

RFID article

These things are everywhere, and I guess I don’t have anything against a library using them. They’re functional, can be designed to be unobtrusive, and just because a library decides to use them doesn’t necessarily mean that they (or anyone else) will gather information (e.g., your whereabouts, etc.) about you. (That concern seems a little ‘Bladerunner’ to me, anyway, but I suppose that’s the greater concern about RFID.) As with many new technologies, the potential for ‘creepy’ uses is there, but as with any new technologies we just need to have faith that they’ll be used in courteous, respectful (relative to privacy) ways. I say, bring on the RFID— I’ve already got one in my car, why not have them in my library?

Monday, October 4, 2010

Assignment 3: Jing

Here's the link to my Jing video about how to use your affiliation with Pitt to listen to music for free via Music Online:



I apologize for my scratchy voice, but this is the cold and flu season...


Here are the links to the 5 screen captures (on Flickr) created using Jing:







Let me know if there are any issues with the links. Thanks.

Wednesday, September 29, 2010

Thursday, September 23, 2010

Week 4 Reading Notes (Representation and Storage)

As some of you have seen, I made the mistake of posting notes for Week 5... Oops. Here are my notes for Week 4.

Wikipedia article on Compression:

Data compression is a fascinating topic for me. I think it is particularly relevant in the context of some discussions that have occurred in and around LIS2000 regarding the digitization of printed materials. Many of us wonder at the effects that the digitization of books will have on the reader's ability to interact with them, but we've already got a good model for predicting certain effects: .mp3 representation of recorded artifacts.

If you go to YouTube and search for your favorite song, there's a good chance you'll come up with a file that has a fair amount of 'glassiness' to it, which sounds like a flanger and is usually most noticeable on hihats/cymbals and soprano/alto/high-bari backing vocals. (You usually won't run into this if you watch the official music video for a track, but it's very common among videos or tracks that were posted by amateur enthusiasts.) Near as I can tell, this doesn't bother most people; however, it's just about my biggest pet-peeve.

The process of digitization and the use of lossy compression to reduce file size risks the quality of the artifact being digitized. If such quality loss is commonplace and often acceptable in digital music files, it makes sense to me that similar/analogous quality loss will be present in digital print material once it reaches the popularity that audio .mp3s already have... That worries me.

To the bibliophile, part of the beauty of the reading experience might have much to do with the intricacies of an antique font, or with the fineness of the paper/binding, or even simply the feel of the page on his/her fingers-- much in the same way that the audiophile might find the most pleasure in a listening experience that is born of a four-tube, analog compressor (different from the type of compression we're talking about in this class) complete with subtle tape hiss in the high midrange and the soft anomalous transients that only occur with analog equipment.
The audiophile can still listen to music on .mp3s, and the bibliophile can still read books, but the experience is qualitatively (and possibly significantly) changed. It might be more convenient, it might even be necessary, but it's just not as good.

On another note, I found the numeric representation of lossy vs. lossless compression to be a very helpful visual aid, but one that is difficult to envision in terms of more complicated multimedia materials. I also found it particularly interesting to think that the digital representation of an artifact that does not show a pattern cannot be compressed. After having read that, it made perfect sense, but it's not something I'd thought of before.
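That "no pattern, no compression" point is easy to see in practice. Here's a quick toy example of my own using Python's zlib module (not something from the article): repetitive data shrinks to a tiny fraction of its size, while patternless random data doesn't compress at all.

```python
import os
import zlib

# Highly patterned data: the same 16 bytes repeated 4,096 times (64 KiB).
patterned = b"library science " * 4096

# Patternless data: 64 KiB of random bytes.
random_data = os.urandom(64 * 1024)

# Patterned input shrinks to well under 1% of its original size;
# random input comes out at least as large as it went in.
print(len(patterned), "->", len(zlib.compress(patterned)))
print(len(random_data), "->", len(zlib.compress(random_data)))
```

Lossless compressors like zlib work precisely by finding and exploiting repetition, which is why an input with no repetition leaves them nothing to do.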

Data Compression Basics Article:

I think the best point made in this article is that lossy compression preserves information, but not data. In order for this point to make sense, one must assume that the people viewing/listening to an uncompressed file will all necessarily get the same information from it. It's true that the range of human perception can be well represented by a bell curve, with most all of our perception abilities accounted for within the first two standard deviations from the mean. But, from a somewhat pedantic, purist viewpoint, it may well be the case that for some with unusual sensitivity, the most relevant/meaningful information might be conveyed by data lost during the compression process.

Digitizing Pittsburgh:

I think this is an example of a digitization project that was carried out very well. The images appear to be very high quality, and I wasn't surprised to read of the lengths the team went to in order to ensure reliability and interoperability among the differing institutions' metadata.

Youtube and Libraries:

The link on Courseweb wouldn't work for me, so I thought I'd post one that worked: here it is.

I thought the best part of this article is something it implies, as opposed to something it explicitly states: new media and modes of media dissemination can play a vital role in the future of libraries and the way that patrons can interact with their library, be it a local public library, a university library, etc. In order for the library as a concept to stay afloat, those of us working in and for them will need to keep abreast of social technologies in order to exploit them effectively.

Tuesday, September 21, 2010

Assignment 2: Flickr & Digitization

Here's the link to my photostream on Flickr.

my Flickr photostream

(Please forgive the blurriness of some of the photos. The resolution is good, I'm just a terrible photographer.)

Week 3 Muddiest Point

While I understand the concept of open source software, I'm not sure I get the economics of it. Does the survival of the open source practice simply rely on a sort of 'programmer altruism?' Are there any models that attempt to explain this phenomenon, or is it a notion akin to 'social authorship'?

Week 5 Reading Notes



Database article on Wikipedia

I found that this reading overlapped nicely with readings for LIS 2005 this past week. In fact, this Wikipedia article also fed into readings I’ve done for Music 2111 (Research and Bibliography). The overlap occurs here: breaking down database structure into external, conceptual and internal levels.

While I was working at a law firm in Chicago, I interacted with a database called Concordance on a daily basis. It was a bland program, a black screen with fields for information input. I seem to recall that there was red text every now and again, too. Up until this point, I’ve always thought of databases in those terms: mostly monochromatic, lifeless windows into a digital world. The database was simply a digital thing that existed solely on a computer. Now, I think I’m beginning to appreciate databases for what they are: physical collections of history (however mundane or forgettable) that exist somewhere beyond those lifeless windows.

In our reading for LIS 2005, we read an argument that databasing has been a central act in modern society from Proust to IMDB. We create databases of all those things in the world that mean something to us, that allow us to blanket our worlds with meaning. Perhaps it’s the conceptual level of databases that allows us to do this?

When discussing the three levels of databases (external, conceptual and internal), the Wikipedia article mentioned that accuracy is reduced for the sake of clarity—outliers are removed, and the database is pure. (As a side note, this also reminded me a lot of readings for MUSIC 2111, Research and Bibliography, in which it was said that the creation of an effective citation structure is necessarily a conceptual structure in which the odd entries, those that don’t fit well in real practice, are left out until the transition to physicality necessitates their inclusion.) I find that this is generally a common practice in Library Science, and indeed in any science that seeks to treat or explain large-scale phenomena—superimposing generalized conceptualizations simply makes it easier to perceive order.

(And now for something completely different.)

Does anyone have any examples of ‘post-relational database models?’ I’m having a hard time with this one…

Setting the Stage (metadata article)

Is anyone else as fascinated with the idea of user-created metadata, such as tags, as I am? I think it’s the democratization of the classification process that attracts me to it so much. There just seems to be such potential for organically, publicly derived classification systems for data! Allowing user-created metadata derived from an open-ended ability to apply adjectives to an object (say, an emotion-related adjective or a temporal adjective, e.g., ‘morning,’ to an artifact that is about neither morning nor emotion directly) could shed so much new light on the ways that information users interact with artifacts of all types, from books, to signs, to images and those things represented in images… This process could yield such a font of data for analysis!

Dublin Core Data Model article

What strikes me the most about this article and the DCDM idea is the linguistic barrier it potentially faces with regard to its ‘internationalization’ goal. Even with a drastically limited set of appropriate, agreed-upon modifiers, it seems likely that linguistic barriers will be met.

It’s an old-hat notion that different languages have different words with different connotations for similar concepts. (There is, for example, no word for ‘home’ in French—there is only the word ‘maison,’ which is the equivalent of ‘house.’ Similar concepts, but different connotations entirely.) Given situations like this, it seems that DCDM would require the use of an artificial language like Esperanto, or else would require acceptance of certain linguistic barriers that cannot be crossed short of widespread possession of multilingual capabilities on the part of catalogers and users.

Maybe this is shortsighted on my part? Too nitpicky?

Furthermore, it seems possible (though I’m not making this point as a whole-hearted supporter of it) that such homogenization of classification protocol could serve to diminish the cultural eccentricities we’ve all come to know, love and study as scholarly researchers. Again, maybe this is only an over-simplification on my part.

How do we establish universal classification schemas without overriding distinct cultural schemas? Is this question an over-reaction? 

Wednesday, September 15, 2010

Week 3 Comments

Here are the links to the comments I've posted thus far this week:


Week 3 Reading Notes


Linux

My brother is a techie. Maybe even a computer nerd, but I use that term in the most endearing way possible. Whenever I have problems with my Mac, I call him, and invariably he’ll start talking about Linux. Until now, that’s meant very little to me—he talks so quickly that it’s hard to keep up.

I guess I’ve always been somewhat aware of the idea that different operating systems are best suited to different kinds of users. Despite that passive awareness, I was fascinated to read of the sheer computing power of Linux, the industrial uses of it, as well as the ‘open’ origins of it.

After reading about it, I’m a little bit ashamed that I’ve just stuck to Macs for the past several years. They’re great for the audio applications I use, but there’s not a lot of room to get to know how they work. Using a Mac kind of seems like someone buying a really expensive, racing-oriented automobile, but buying it with an automatic transmission because they can’t drive manual— that is, they’ve got this great machine, but they rely on it to do even the simplest of the mechanical tasks it was built to do.

Given that I’m in an LIS program, I feel a little sad that I’ve missed the boat (up to this point) and failed to familiarize myself with such a customizable and democratic operating system.

Mac OS X

As I said, I use Macs all the time, but I didn’t switch to Macs for the OS. In fact, before I read this article (admittedly a couple times before anything sank in) I hadn’t given much thought to the ‘flash and bang’ of OS X, or even to the underlying functionality of it. These articles were a fine overview (though Singh’s article required a great deal of rereading for me, a person with no computer science background, to comprehend) of the history of Mac OS X. I do, however, wish there was some kind of middle ground between the simplicity of the Wikipedia article and the detail of Singh’s article. In any case, while I still don’t get everything (particularly re: Singh’s article), I think I’m better off than I was before I read them… I think.

Windows

Maybe it’s just me, but I kept picturing a guy in pink Brooks Brothers shorts typing seaside at a resort when I read this article— it came across as little more than a sales pitch. In their own way, each article we read lacked a certain impartiality, but this was over the top. Personally, I found the Wikipedia article on Windows to be much more helpful: http://en.wikipedia.org/wiki/Windows

Monday, September 13, 2010

Week 2 Muddiest Point

What is the digitization process for audio artifacts? How do we get from, say, the master tapes of a Rudy Van Gelder recording of Lee Morgan from the 1950s to a digitally remastered .mp3 version of the same session? What is the conversion process from analog tape to digital file?

Friday, September 10, 2010

Week 2 Reading Notes (Repost)

REPOST: (for the sake of clarity, I’ve reposted my previous Week 2 comments so that they all appear in the same place, not as a post and comment. Apologies for the redundancy.)

Moore’s Law (Wikipedia and Video):
I had never heard of Moore’s Law—in fact, I didn’t even know that Intel had been around since the 60s. Perhaps I shouldn’t so openly admit to my relative ignorance, but because I’d never come across it, Moore’s Law struck me as particularly interesting.

At first I was amazed by the accuracy of Moore’s prediction. That such a relatively exacting mathematical prediction about an industry synonymous with innovation would hold up over four or five decades seemed uncanny. Then the notion of a self-fulfilling prophecy popped into my head. When this notion was also mentioned by editors of the Wikipedia article, I began to consider it more seriously.

I think Moore’s Law is a fine example of the double-edged sword that such predictions constitute. While they are necessary, when possible, to help paint a clearer picture of the future directions of industry and research, the picture they paint can itself be limiting. In fact, used as a heuristic for research and development, Moore’s Law could be seen to negatively impact the speed of innovation.

The two year pace set by Moore’s Law seems to fit very well into the notion of planned obsolescence. If you make a product in 2000 that you know will be made inferior in 2002, and this trend is likely to continue for X innovative iterations (X*2 years), why speed up research and development? You’d effectively decrease the span of time (X*Y, where Y<2) during which you would likely have a body of consumers active in the cycle of ‘purchase a new product every 2 years.’
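To put rough numbers on that doubling cadence, here's a back-of-the-envelope sketch I wrote myself, starting from the Intel 4004's widely cited 2,300 transistors in 1971:

```python
def predicted_transistors(start_count, start_year, year, doubling_period=2):
    """Transistor count predicted by doubling every `doubling_period` years."""
    doublings = (year - start_year) // doubling_period
    return start_count * 2 ** doublings

# Starting from ~2,300 transistors in 1971 (the Intel 4004):
for year in (1971, 1981, 1991, 2001, 2011):
    print(year, predicted_transistors(2300, 1971, year))
```

Twenty doublings over forty years turns thousands of transistors into billions, which is roughly the trajectory the industry actually followed.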

I’m not entirely so cynical, but perhaps I’ll wait until 2015 to buy my next iPod. Haha.

Wikipedia Entry: Computers
Regarding the wikipedia entry on computers: I found this article to be helpful, as I'm not well acquainted with the inner workings of computers. For those of you in a similar boat, I highly recommend taking a look at the textbook for this course. It provides a very basic (sometimes comically so) introduction to the various components of computers and other technologies.


The Computer Museum:
As for the computer museum website: I loved this site. I spent some time browsing around the various pages, and the whole time I couldn't get another site out of my head:

www.synthmuseum.com

For those of you who enjoy listening to music that involves synthesizers (e.g. Aphex Twin, Emerson Lake & Palmer, The Cure, Radiohead), this is a really cool site that covers the history, development and use of synthesizers.

Tuesday, August 31, 2010

Week 2 Discussion Topics

Per Jiepu's instructions, I'm posting my responses to the discussion topics here on my blog, not on the discussion board as I indicated earlier. Sorry for any confusion...


So, here are my two cents about digitization.

Is digitization worth it?

Short answer: yes.

The fact that digitization allows for articles and academic materials to be made widely available outside of the research libraries that have traditionally held them would suggest that digitization is a worthy endeavor. However, it is probable that the worth of digitization is relative to the particular fields, domains and user populations in question, as well as the needs and preferences of the individuals working in those fields.

As an anecdotal example, I prefer to read most journal articles in physical form; for the sake of convenience and the ability to annotate the text, this means that I typically search for digitized versions of articles, print them out, and read them in the comfort of my living room with a bebop record playing. (Some might say that this is a waste of paper, though I would go so far as to say that photocopying articles in the library is equally wasteful—I also rarely discard articles after reading them.) However, if the article I’m reading is particularly weighty (let’s say an intricate statistical analysis of a longitudinal study pertaining to the relationship between age, personality and musical practice routines in orchestral musicians), it’s likely that I’d find myself reading in a library anyway… In some cases, the convenience of digitization directly benefits my research habits; in others, it has no impact whatsoever.

Certainly it’s not enough to say that sometimes digitization is worth it, and sometimes it’s not. Instead, we must choose avenues of investigation to support or refute our opinions. The question might be scientifically addressed through the accumulation and analysis of data pertaining to the information retrieval/usage of individuals working in different fields (accounting for types of materials used in digital or analog form). Similarly, one could measure/estimate the circulation and citation history of the items in question: if an item has been widely cited in other literature or if an item has been viewed/checked out many times, it is likely that a large number of people would benefit from the digitization of this item. This is certainly over-simplified, but equally as certainly, an empirical approach to this question could help shed some light on digitization's worth.

Furthermore, in creative fields the dissemination of digitized representations of artifacts fosters creativity. If authorship of creative materials (defined as loosely or stringently as you like) in the modern world is to be seen as a partially or wholly social act, the availability of digitized artifacts would allow for novel selections and combinations of extant symbol systems (e.g. harmonic structures in the domain of the Anglo-American popular music tradition). If Steve Reich hadn’t been exposed to gamelan drumming, American minimalist music wouldn’t exist as we know it. Undoubtedly, as yet unknown authors and composers are currently being exposed to media that would be entirely unavailable to them without digitization, thus increasing the palette of symbol-system usages available to them when they ‘paint their masterpiece.’

Finally, in terms of the hypothetical scenario presented by Lee, in which a one-of-a-kind rare manuscript of 200 folios would be digitized to the detriment of funds available for traditional collection development, I propose the following (naïve) solution: once an artifact is digitized, access is potentially democratized, so why wouldn’t multiple bodies or organizations whose collections would benefit from the inclusion of the digitized artifact all chip in to pay? Think of it as going out to dinner with a group of good friends to celebrate an occasion: eight people eat at a nice restaurant, which costs significantly more than if those eight people ate separately at home or if one hosted a dinner party. They don’t need to go out to dinner, but they want to because of the perceived benefits. Assuming that no one ordered the ahi tuna tartare or the prime rib, no one drank seven sapphire martinis, and everyone had an equally good, memorable time, the bill could justifiably be split equally among all eight parties. The cost incurred to make the occasion more memorable by going out would be shared equally; no one party would have to watch their spending that month any more than the others (relatively speaking). I know it’s naïve, but it’s simple, and it’s all in the name of greater access to information, something from which everyone benefits. If everyone benefits from access, shouldn’t everyone help to pay the bill?
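Just to make the arithmetic of the dinner-party analogy concrete, here is a trivial Python sketch; the cost figure and partner names are entirely made up.

```python
# A back-of-the-envelope version of the "dinner party" argument:
# split a one-time digitization cost evenly among partner institutions.
# All figures here are hypothetical.

def split_cost(total_cost, partners):
    """Evenly divide a shared cost; returns the per-partner share."""
    return round(total_cost / len(partners), 2)

partners = ["Library A", "Library B", "Archive C", "Museum D"]
cost_of_digitizing_manuscript = 48_000  # hypothetical figure

share = split_cost(cost_of_digitizing_manuscript, partners)
print(f"Each of the {len(partners)} partners pays ${share:,.2f}")
```

The same even split works no matter how many partners join, which is exactly the appeal of the restaurant-bill model: each additional participant lowers everyone's share.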

Digitization is expensive; how can it be sustained? Is working with private companies a good solution? Are there any problems we need to be aware of with this approach?

I knew a student who worked for Google on the Google Books project, sitting in front of a scanner for several hours per week in order to partially subsidize his living costs. We’ll call him Pip. (Why wouldn’t we?) While I’m sure it was nice for Pip to find a job that required so little thought, fostered the development of speed reading (as he claimed) and paid relatively well, it always seemed to me an odd allocation of manpower to pay Pip to 'sit in front of a machine and turn pages,' as he put it.

I’m not an engineer, and I don’t have even the faintest notion of robotics beyond the very basic works of Alan Turing and Daniel C. Dennett, but I’d be willing to bet dollars to donuts that there are many engineers out there who could design machines capable of digitizing books-- maybe machines that would cost less to build and operate than employing humans to carry out digitization tasks. Then again, maybe not, but I'm sure there are qualified parties out there willing to give it a try.

Regarding privatization of the digitization process: the employment of private companies to perform digitization tasks could likely further muddy the legal terrain surrounding the issue. If Company A is contracted by Organization B to digitize an artifact, Company A would likely vie for a cut of the revenue generated by fees derived from consumer access to said digitized work. In other words, the digitizing party could potentially be granted some degree of ownership of the works they digitize. If private companies are to be employed to carry out digitization tasks they would have to exhibit no small amount of corporate social responsibility (CSR) in order to perform the tasks in a mindful, generally beneficial way.


“Risk of a crushing domination by America in the definition of the idea that future generations will have of the world.” Is this a valid concern?

This issue is addressed by Racine: “Racine said that his predecessor had taken ‘advantage of the programs announced by Google to bring a problem into the public area that was broadly confined to specialists,’ galvanizing French politicians to take action.” (http://www.nytimes.com/2007/10/28/technology/28iht-LIBRARY29.1.8079170.html). Here Racine downplays Jeanneney’s claims, hinting at the hyperbolic, political nature of Jeanneney’s statement and somewhat undercutting its validity.

In any case, it is necessary to respect and preserve the cultural traditions of the world’s diverse societies at the continental, national and local levels. No one society’s cultural artifacts should be deemed intrinsically more valuable, or more worthy of long-term preservation and dissemination, than those of any other. Despite this core belief, it is unavoidable that different societies possess different levels of financial ability. Accordingly, the cultural artifacts of certain societies are likely to be digitized faster than those of others; however, digitizing one society’s cultural artifacts does not strip another society of its cultural heritage, nor does it strip that society’s non-digitized artifacts of their worth. The digital market of cultural artifacts might, for a while, be dominated by the artifacts of societies with more immediate financial ability, but that does not mean other societies can’t digitize their cultural artifacts later.

Everyone doesn’t show up to a party at the same time, but once they’re there, they don’t leave (at least not if they’re digitized).


Another brief notion to consider: How different would our lives be without Netflix, iTunes, and our countless academic databases? How much more difficult would it be to find a good recipe for Pho or your favorite kind of curry? Would our lives be better or worse or just different?

Monday, August 30, 2010

Week 1: The Muddiest Point

For this week's muddiest point, I'd like to bring up the apparently mysterious transition that occurs when information is synthesized into knowledge. I look forward to reading Losee (1997), which was mentioned in Professor He's PowerPoint presentation, but in the meantime, I wonder if bits of information might effectively be envisioned as building blocks, which when combined serve to form a greater structure-- a structure we might call knowledge. It makes sense that contextualized information becomes knowledge, but the necessary context must itself be composed of information, in turn contextualized by other bits of information. Therefore, it seems to me to be something magical when information is synthesized by human minds into knowledge-- like informational alchemy. I'd love to hear if people find any articles outside Losee (1997) that deal with this transformation!

Introduction and Week 1 Reading Notes:

Hello!

Before we get down to the nitty gritty (which probably won't be all that nitty or gritty this week-- Jello pudding more than tapioca), let me tell you a little bit about myself. I have a BA in Psychology from Kenyon College and an MSc in Research Methods in Psychology with an emphasis in Music Psychology from Keele University, Staffordshire, UK. I've worked as a professional musician, a clerk at a large corporate law firm, and a waiter and bartender. Inevitably, these experiences will inform my future posts and responses...

(As a matter of nuts and bolts, I'd like to mention that full citations of the articles in question are not provided at the end of this post, as we've all read the same ones, so there shouldn't be too much confusion.)

As promised, I'd like to mention my reaction to the articles we read for Week 1 in the context of my previous experience in the field of Psychology. I wasn't sure what to expect from articles published in the field of LIS, though I was somewhat certain they wouldn't exactly resemble the articles one typically sees in the field of Psychology or Music Psychology. This, indeed, turned out to be the case (though to varying degrees depending on the article in question). Therefore, the primary topics to be discussed will be: article formatting, vocabulary used, and citation habits.

1. 2004 Information Trends: Content, Not Containers (OCLC, 2004)

The language used in this article was perhaps the most surprising aspect of it-- 'consumer' frequently popped up. Certainly, it makes sense to see the world of information technology and access to information in terms of information producers and information consumers, but the word 'consumer' has always alluded to marketing in my mind. (It's associated with opinion leaders and influencers, etc.) In an unexpected way, the use of this term was very effective (even if unintentionally) at drawing a permanent link in my mind between the dissemination of information and the financial forces that underlie that dissemination.

Probably somewhat more directly in line with the intent of the authors is the notion that the packaging of information, the containers referred to in the title, is in a state of flux. Tinier pieces of information, pieces that might have previously been considered only fragments of a larger artifact, have become commodities. This commodification has and will continue to have an impact on the shape of the information market. We already buy songs separate from records and melodies separate from songs (in the form of ringtones). It'll be interesting to see how far this dissecting trend goes...

2. Information Literacy and Information Technology Literacy: New Components in the Curriculum for a Digital Culture (Lynch, 1998)

The date of this article struck me immediately. Publishing psychologists, in my experience, tend to adhere to a ten-year rule: if it hasn't been published within the past ten years, it's probably not a particularly interesting article, as the information it contains (if of any initial interest) has more than likely already made its way into the canon of relevant articles.

Despite the date, to an LIS newbie like me, it was interesting to see a professional's differentiation between Information Literacy and Information Technology Literacy. In a field that seems riddled with jargon and alphabet soup, I guess sometimes it's nice to have the basic concepts laid out for you.

3. Lied Library @ four years: technology never stands still (Vaughan, 2004)

I smiled when I saw this one. It's from a peer-reviewed journal (judging from the Received, Revised, Accepted dates provided) and it has an abstract and a body complete with methodology and findings sections. I guess you could say this article was like a comfy couch. However, I was a little wary of reading it after I noticed that it's classified as a case study, which in the world of psychology usually means there's some validity (however much or little) but not a great deal of reliability. Accordingly, I was pleasantly surprised to realize how universal the issues of funding, machine maintenance and utilization of space are to all libraries, particularly those on par with Jason Vaughan's library at UNLV.

In terms of the article's citations, I was surprised to see how self-referential the author was. Five of the author's six references were to papers he either authored or coauthored. I wonder if this is common practice in LIS? Similarly, I wonder how many people have cited this article... I suppose that's something to check out after class tonight.


In summary, I was surprised at the breadth of article formatting, authorship and publication styles. Similarly, I did not expect so many of the references provided by the authors to consist of URLs or self-authored works. I look forward to keeping an eye on these aspects of publication in the field of LIS as we progress through the weeks of LIS 2600. Have a good day!

Time for lunch.