jopen, 6 years ago

A fast-to-develop, fast-to-run, Go-based toolkit for ETL and feature extraction on Hadoop.


Crunch is optimized to be a big-bang-for-the-buck library, yet almost every aspect is extensible.

Let's say you have a log of semi-structured, deeply nested JSON, where each line contains one record.
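For concreteness, here is a minimal sketch in plain Go (not the Crunch API) of what such a record looks like and how a dotted-path query like `head.x-forwarded-for` can be resolved against it. The sample record and the `lookup` helper are hypothetical illustrations, not part of Crunch:

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// lookup walks a dotted path such as "head.x-forwarded-for" through
// the nested maps that encoding/json produces for a JSON object.
func lookup(doc map[string]interface{}, path string) (string, bool) {
	var cur interface{} = doc
	for _, part := range strings.Split(path, ".") {
		m, ok := cur.(map[string]interface{})
		if !ok {
			return "", false
		}
		if cur, ok = m[part]; !ok {
			return "", false
		}
	}
	s, ok := cur.(string)
	return s, ok
}

func main() {
	// One hypothetical log line: semi-structured, deeply nested JSON.
	line := `{"head":{"x-forwarded-for":"203.0.113.7"},` +
		`"action":{"timestamp":"2014-01-01T00:00:00Z","source":"web"}}`

	var doc map[string]interface{}
	if err := json.Unmarshal([]byte(line), &doc); err != nil {
		panic(err)
	}
	ip, _ := lookup(doc, "head.x-forwarded-for")
	src, _ := lookup(doc, "action.source")
	fmt.Println(ip, src) // 203.0.113.7 web
}
```

Doing this by hand for every field is exactly the boilerplate the library is meant to remove.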

You would like to:

  1. Parse JSON records
  2. Extract fields
  3. Clean up/process fields
  4. Extract features - run custom code on field values and output the result as new field(s)



```go
// Describe your row
transform := crunch.NewTransformer()
row := crunch.NewRow()

// Use "field_name type". Types are Hive types.
row.FieldWithValue("ev_smp int", "1.0")

// If no type given, assume 'string'
row.FieldWithDefault("ip", "", makeQuery("head.x-forwarded-for"), transform.AsIs)
row.FieldWithDefault("ev_ts", "", makeQuery("action.timestamp"), transform.AsIs)
row.FieldWithDefault("ev_source", "", makeQuery("action.source"), transform.AsIs)

row.Feature("doing ip to location", []string{"country", "city"},
	func(r crunch.DataReader, row *crunch.Row) []string {
		// call your "standard" Go code for doing ip2location
		return ip2location(row["ip"])
	})

// By default, this builds a Hadoop-compatible streamer process that
// understands JSON (stdin[JSON] to stdout[TSV]).
// It also plugs in Crunch's CLI utility functions (use -help).
crunch.ProcessJson(row)
```
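To make the streamer contract concrete: under Hadoop streaming, the generated process reads one JSON record per line on stdin and emits one TSV row per line on stdout. Below is a minimal hand-written sketch of that contract; the field names, the flat-record assumption, and the `toTSV` helper are mine. The real streamer additionally applies the dotted-path queries, defaults, and transforms declared above:

```go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strings"
)

// toTSV flattens one decoded JSON record into a tab-separated line
// in a fixed column order. Missing or non-string fields become "".
func toTSV(rec map[string]interface{}, fields []string) string {
	cols := make([]string, len(fields))
	for i, f := range fields {
		if s, ok := rec[f].(string); ok {
			cols[i] = s
		}
	}
	return strings.Join(cols, "\t")
}

func main() {
	fields := []string{"ip", "ev_ts", "ev_source"} // assumed column order
	sc := bufio.NewScanner(os.Stdin)
	w := bufio.NewWriter(os.Stdout)
	defer w.Flush()
	for sc.Scan() {
		var rec map[string]interface{}
		if err := json.Unmarshal(sc.Bytes(), &rec); err != nil {
			continue // skip malformed lines rather than failing the job
		}
		fmt.Fprintln(w, toTSV(rec, fields))
	}
}
```

Because the contract is just line-oriented stdin/stdout, the same binary can be dropped into a Hadoop streaming job or tested locally with a plain shell pipe.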