Showing posts with label teaching. Show all posts

Saturday, September 10, 2016

20 Resources for Teaching Kids How to Program Code

Educators now admit that the past decades of ICT teaching were flawed. Teaching kids to use MS Word or PowerPoint does not empower them to join the IT revolution. Current thinking is that everyone should know how to code (at least the fundamentals). This will help everyone understand that computer programs aren't something magical that only a select few can create, but a tool anyone can use. To this end, more and more ways of teaching kids to code are being created. This article lists "20 Resources for Teaching Kids How to Program & Code".



from The Universal Machine http://universal-machine.blogspot.com/


Friday, July 1, 2016

Teaching machines to read between the lines and a new corpus with entity salience annotations



Language understanding systems are largely trained on freely available data, such as the Penn Treebank, perhaps the most widely used linguistic resource ever created. We have previously released lots of linguistic data ourselves, to contribute to the language understanding community as well as encourage further research into these areas.

Now, we’re releasing a new dataset, based on another great resource: the New York Times Annotated Corpus, a set of 1.8 million articles spanning 20 years. 600,000 articles in the NYTimes Corpus have hand-written summaries, and more than 1.5 million of them are tagged with people, places, and organizations mentioned in the article. The Times encourages use of the metadata for all kinds of things, and has set up a forum to discuss related research.

We recently used this corpus to study a topic called “entity salience”. To understand salience, consider: how do you know what a news article or a web page is about? Reading comes pretty easily to people -- we can quickly identify the places or things or people most central to a piece of text. But how might we teach a machine to perform this same task? This problem is a key step towards being able to read and understand an article.

One way to approach the problem is to look for words that appear more often than their ordinary rates. For example, if you see the word “coach” 5 times in a 581-word article, and compare that to the usual frequency of “coach” -- more like 5 in 330,000 words -- you have reason to suspect the article has something to do with coaching. The term “basketball” is even more extreme, appearing 150,000 times more often than usual. This is the idea behind the famous TF-IDF weighting, long used to index web pages.
Congratulations to Becky Hammon, first female NBA coach! Image via Wikipedia.
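The term-ratio idea above is easy to sketch in a few lines of Python. The counts below are the hypothetical ones from the “coach” example in the text, not measured corpus statistics:

```python
# Compare a word's in-document frequency against its background frequency
# in a large corpus; a high ratio suggests the word is salient to the topic.

def salience_ratio(doc_count, doc_length, bg_count, bg_length):
    """Ratio of a term's rate in this document to its background rate."""
    doc_rate = doc_count / doc_length
    bg_rate = bg_count / bg_length
    return doc_rate / bg_rate

# "coach" appears 5 times in a 581-word article, versus roughly 5 per
# 330,000 words in general text -- about 568x its ordinary rate.
ratio = salience_ratio(5, 581, 5, 330_000)
print(round(ratio))  # -> 568
```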
Term ratios are a start, but we can do better. Search indexing these days is much more involved, using, for example, the distances between pairs of words on a page to capture their relatedness. Now, with the Knowledge Graph, we are beginning to think in terms of entities and relations rather than keywords. “Basketball” is more than a string of characters; it is a reference to something in the real world which we already know quite a bit about.

Background information about entities ought to help us decide which of them are most salient. After all, an article’s author assumes her readers have some general understanding of the world, and probably a bit about sports too. Using background knowledge, we might be able to infer that the WNBA is a salient entity in the Becky Hammon article even though it only appears once.

To encourage research on leveraging background information, we are releasing a large dataset of annotations to accompany the New York Times Annotated Corpus, including resolved Freebase entity IDs and labels indicating which entities are salient. The salience annotations are determined by automatically aligning entities in the document with entities in accompanying human-written abstracts. Details of the salience annotations and some baseline results are described in our recent paper: A New Entity Salience Task with Millions of Training Examples (Jesse Dunietz and Dan Gillick).
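As a rough sketch of that alignment step (the details here are an assumption for illustration, not the paper's exact procedure), an entity in the document can be labeled salient when its resolved entity ID also appears among the entities in the human-written abstract:

```python
# Label each document entity as salient iff the same resolved entity ID
# also occurs in the entity set extracted from the abstract.

def label_salience(doc_entity_ids, abstract_entity_ids):
    """Map each document entity ID to a boolean salience label."""
    abstract_set = set(abstract_entity_ids)
    return {eid: eid in abstract_set for eid in doc_entity_ids}

# Hypothetical IDs: only the entity shared with the abstract is salient.
labels = label_salience(["/m/a", "/m/b"], ["/m/b"])
print(labels)  # -> {'/m/a': False, '/m/b': True}
```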

Since our entity resolver works better for named entities like WNBA than for nominals like “coach” (this is the notoriously difficult word sense disambiguation problem, which we’ve previously touched on), the annotations are limited to names.

Below is sample output for a document. The first line contains the NYT document ID and the headline; each subsequent line includes an entity index, an indicator for salience, the mention count for this entity in the document as determined by our coreference system, the text of the first mention of the entity, the byte offsets (start and end) for the first mention of the entity, and the resolved Freebase MID.
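Based on the field order described above, a parser for one entity line might look like the following sketch (the tab separator and exact field layout are assumptions; check the released data's documentation):

```python
# Parse one entity annotation line in the field order the post describes:
# entity index, salience indicator, mention count, first-mention text,
# start/end byte offsets, and resolved Freebase MID.

from typing import NamedTuple

class EntityAnnotation(NamedTuple):
    index: int
    salient: bool
    mention_count: int
    first_mention: str
    start_offset: int
    end_offset: int
    freebase_mid: str

def parse_entity_line(line: str) -> EntityAnnotation:
    index, salient, count, mention, start, end, mid = line.rstrip("\n").split("\t")
    return EntityAnnotation(
        index=int(index),
        salient=salient == "1",
        mention_count=int(count),
        first_mention=mention,
        start_offset=int(start),
        end_offset=int(end),
        freebase_mid=mid,
    )

# Hypothetical record (the MID here is a placeholder, not a real Freebase ID).
rec = parse_entity_line("0\t1\t3\tWNBA\t120\t124\t/m/0example")
print(rec.salient, rec.mention_count)  # -> True 3
```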
Features like mention count and document position give reasonable salience predictions. But because they only describe what's explicitly in the document, we expect that a system using background information to expose what's implicit could give better results.
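A toy baseline along those lines scores entities by mention count and by how early they first appear; the weights and threshold below are illustrative assumptions, not the paper's model:

```python
# Score an entity from two surface features: how often it is mentioned,
# and how early its first mention occurs (earlier -> higher score).

def baseline_salience_score(mention_count, first_offset, doc_length):
    position = 1.0 - first_offset / max(doc_length, 1)
    return mention_count + 2.0 * position

def predict_salient(mention_count, first_offset, doc_length, threshold=2.0):
    """Predict salience by thresholding the baseline score."""
    return baseline_salience_score(mention_count, first_offset, doc_length) >= threshold

# An entity mentioned 5 times starting at the first byte scores well;
# a single late mention does not.
print(predict_salient(5, 0, 1000))    # -> True
print(predict_salient(1, 950, 1000))  # -> False
```

Such a baseline misses cases like the WNBA example above, where a salient entity appears only once; that is exactly the gap background knowledge is meant to close.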

Download the data directly from Google Drive, or visit the project home page with more information at our Google Code site. We look forward to seeing what you come up with!

Sunday, May 1, 2016

Computer Science Teaching Fellows Starting Up in Charleston SC



Google recently started up an exciting new program to ignite interest in computer science (CS) for K12 kids. Located in our South Carolina data center, the Computer Science Teaching Fellows is a two-year postgraduate fellowship for new STEM teachers and CS graduates. The goal is to bring computer science and computational thinking to all children, especially underrepresented minorities and girls, and to close the gap between the ever-increasing demand for computer scientists and the inadequate supply. We hope to learn what really works and scale those best practices regionally and then nationally.

The supply of CS majors in the pipeline has been a concern for many years. In 2007, the Computer Science education community was alarmed by the lack of CS majors and enrollments in US colleges and universities.

Source: 2009-2010 CRA Taulbee Survey (http://www.cra.org/resources/)

This prompted the development of several programs and activities to raise awareness about the demand and opportunities for computer scientists, and to spark the interest of K12 students in CS. For example, the NSF funded curriculum and professional development around the new CS Principles Advanced Placement course. The CSTA published standards for K12 CS and a report on the limited extent to which schools, districts and states provide CS instruction to their students. The CS advocacy groups Computing in the Core and Code.org have played an instrumental role in adding provisions to the reauthorization of the Elementary and Secondary School Act to support CS education. More generally, we have seen innovations in online learning with MOOCs, machine learning to provide personalized learning experiences, and platforms like Khan Academy that allow flipped classrooms.

All of these activities represent a convergence in the CS education space, where existing programs are ready for scale, and technological advancements can support that scale in innovative ways. Our Teaching Fellows will be testing after-school programs, classroom curricula and online CS programs to determine what works and why. They'll start in the local Charleston area and then spread the best programs and curricula to South Carolina, Georgia, and North Carolina (where we also have large data centers). They are currently preparing programs for the fall semester.

We are very excited about the convergence we are seeing in CS education and the potential to bring many more kids into a field that offers not only great career opportunities but also a shot at really making a difference in the world. We’ll keep you posted on the progress of our Teaching Fellows.



Friday, March 4, 2016

Meet Hopscotch, the iOS app teaching kids how to program

I've blogged before about the growing movement to teach young children how to program. Hopscotch is a new iPad app that lets kids drag and drop blocks of code to create their own programs. Kids can make games, stories, animations, interactive art, apps... if they can imagine it, they can build it with Hopscotch. But the important thing about teaching kids to code is not just that they'll have fun; they'll also learn problem solving, critical thinking, and the fundamentals of computer programming. Check Hopscotch out: it's free, and you don't have to be a kid to use it.

from The Universal Machine http://universal-machine.blogspot.com/
