Monday, November 1, 2010

Week 9 Reading Notes:

Brighton University Resource Kit for Students:

A little anecdote while I wait for the ISO to download: there was a postgraduate pub at the UK university where I did my MSc. It sounds a little snobbish, but a graduate-and-faculty-only pub was a wonderful place to wind down and casually discuss your research. (It never hurts to get second opinions from people working in other areas.) BURKS came up one night while a group of us were discussing various bits of humanistic research, from our involvement in the Iraq Body Count project to the roots of inequality in the Caribbean islands. The conversation eventually turned to international inequality and then to the idea that information could be the ‘great democratizer.’ Predictably, the global variance in Internet access came up, and an electro-acoustic music composition PhD candidate I knew brought up BURKS. I’m glad to see that at least one night at the pub has validated itself! I can’t wait to look through it once it finishes downloading (in two hours…).

Survey of XML standards Part 1:

I was immediately struck by the fact that the English specification of XML (now in its second edition) is the one intended to standardize the Extensible Markup Language. I guess it makes sense; I mean, French was the language of politics before English took over. There is always a preferred language for a certain field. That said, I wonder whether the democratizing potential of Internet technologies justifies the use of some kind of auxiliary language, maybe Esperanto? It seems that the use of such a language would automatically put everyone on the same page, as very few countries actually use Esperanto as a national language…

I also enjoyed reading that XML is a simplification of SGML. We’ve learned about XML in a few other classes (though this is one of the better explanations of it that I’ve seen), but no one has previously come right out and said that XML is an attempt to streamline or simplify SGML, the parent of HTML. Coming from the field of psychology, where basic analyses of variance (ANOVAs) and analyses of covariance (ANCOVAs) have given way to more complex statistical methodologies like factor analysis and the exponentially more complex structural equation modeling (SEM), I think it’s nice to see progress come in the form of simplification. That said, the more I read, the less it seems that XML is really a simplification. Maybe that’s just because any set of drastic changes to a standardized procedure already in place necessitates the clarification of every little change and every new nuance?
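
To make that concrete for myself, here is a minimal sketch (a made-up note document, not one from the readings) of the stricter rules that the “simplification” actually imposes:

    <?xml version="1.0" encoding="UTF-8"?>
    <note date="2010-11-01">
      <!-- Unlike SGML, XML requires every element to be closed,
           tag names to match case exactly, and attribute values
           to be quoted. -->
      <to>Reader</to>
      <from>John</from>
      <body>A well-formed document, with none of SGML's optional shortcuts.</body>
    </note>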

After reading through this (and admittedly, I didn’t really attempt what you might call a close textual analysis, haha), I’m not entirely sure that I’m clear on the distinction between XML and XHTML. Can anyone shed some light on this?

Extending your Markup:

I found the examples provided in this tutorial to be very helpful. Learning by example is always easier for me (maybe I’m a visual thinker, or just a little thick; probably a combination of the two). In any case, the inclusion of the examples found in the orange boxes was much appreciated.

XML Schema Tutorial:

Like the HTML tutorial by W3 that we saw last week, this site functioned a little like a cup of IT chamomile tea. I definitely bookmarked it, as I’m sure it will come in handy down the line.
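
Just so I remember the gist later, here is a toy schema of my own (the element names are invented, not taken from the tutorial) declaring the structure of the note document sketched above:

    <?xml version="1.0" encoding="UTF-8"?>
    <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
      <!-- A note carries an optional date attribute and must contain
           to, from, and body, in that order. -->
      <xs:element name="note">
        <xs:complexType>
          <xs:sequence>
            <xs:element name="to" type="xs:string"/>
            <xs:element name="from" type="xs:string"/>
            <xs:element name="body" type="xs:string"/>
          </xs:sequence>
          <xs:attribute name="date" type="xs:date"/>
        </xs:complexType>
      </xs:element>
    </xs:schema>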

4 comments:

  1. John, I will try to elaborate on XML and XHTML (I am still looking for the class lecture on XML to learn more). I see XML as logical metadata for a document. XML uses Unicode characters and has a defined set of standards/rules for encoding e-documents. Character data (i.e., tabs, returns, legal Unicode characters) and markup (i.e., start tags, end tags, empty tags, etc.) are the constituents of XML entities, which are composed of either parsed or unparsed data. An XML processor needs to be able to read objects encoded in UTF-8 and UTF-16. XML is used to encode documents and serialize data.
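
    For example (a small toy document of my own, just to illustrate the pieces), the prolog declares the encoding, the tags are the markup, and everything between them is character data:

        <?xml version="1.0" encoding="UTF-8"?>
        <message priority="high">
          <!-- markup: start/end tags, attributes, and comments -->
          Character data, including the predefined entity for an ampersand: fish &amp; chips.
          <attachment/> <!-- an empty-element tag -->
        </message>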

    XHTML emerged after HTML was re-examined in the light of XML (XHTML can be regarded as an XML application) in order to become interoperable and extensible. Consequently, XHTML introduced an appropriate set of rules and incentives: namespaces, separation of structure from visual presentation, compliance with and validation against the standards, and a strict syntax (logical markup, case sensitivity, specs regarding empty and non-empty elements, etc.). It is said that XHTML can be implemented on cell phones, mobile devices, and the like, but such implementation has been questioned.
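
    To illustrate that strict syntax (a toy comparison of my own, not taken from the spec), markup that lenient HTML parsers tolerate must be tightened before it is well-formed XHTML:

        <!-- tolerated by lenient HTML parsers -->
        <IMG SRC="photo.jpg">
        <p>an unclosed paragraph

        <!-- the well-formed XHTML equivalents -->
        <img src="photo.jpg" alt="photo" />
        <p>a closed paragraph</p>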

    Thus: XML is a meta-language (a syntax for describing data), while XHTML (an application of XML to HTML; XML is a specification language for application languages such as XHTML and RSS) displays documents (it describes a text document on the web through formatting commands). The combination of XML and XHTML points toward a complete markup language that can be applied on the semantic web.

    Perhaps predictive modeling will be able to establish causal or covariance relationships among markup languages, web browsers, hardware, software, web design, and web developer practices. As a result, computational linguists will acquire sufficient knowledge to make an informed decision regarding the development of a language that is simple, elegant, and flexible, yet conforms to only one formal compliance and one standard (at p < 0.001, of course).

  2. Your question really pushed me to think. Thanks. Pondering a little more about computer languages and semantics, I considered their complexity and simplicity; the different approaches to information and the different levels of information styles; how information is communicated and delivered to the user; the meaning (or rather, the interpretation of the meaning) and the different significance information has for the web programmer and the user; the relationships between the models and methods that web developers and web linguists use when writing code; and the problems of interoperability and extensibility. I conclude that these factors contribute to the complexity of a programming language, the difficulty of platform adaptation, and the constant evolution of the language.

    In my opinion, computer programming is constantly changing and adapting. A language (in our case, a computer language) adapts to human needs: the delivery of documents, document complexity, delivery speed, compliance with different browsers, and so on. We shape computer languages in our daily lives and expect everything to happen quickly, in milliseconds. However, computer scientists have been working diligently, and continue to work, to improve code, so at the very least we should be grateful for their dedication, not complain so much, and put some effort into learning. Thanks.

  3. Thank you for taking the time to provide such an in-depth response to my question! The idea that computer linguists are working towards the development of a universal language where practices across applications do not differ significantly (p < .001) is intriguing, and it provides a hypothetical destination for the evolution of SGML, HTML, XML, and XHTML. Before you brought this up, I couldn't really see the forward progress inherent in this evolution. Now it seems quite a bit clearer. Thanks again!

  4. John, you are welcome. We all learn from and help one another! I am grateful to my friends and former colleagues for their insight and help; the time spent in the laboratory, in coursework, and in informal communication was both an educational and a culturally enriching experience during my grad school years. Computer science is a fascinating science! Some topics can be difficult and some can be simple. Next time, someone else will teach me something I do not know. Digital libraries are a great topic, and I look forward to this class!
