BD2KGenomics / cgcloud

Image and VM management for Jenkins, Spark and Mesos clusters in EC2

UnknownHostException: Name or service not known in spark-shell with spark-box #250

Closed · heuermh closed this issue 7 years ago

heuermh commented 7 years ago

I'm trying to follow the docs here: https://github.com/bigdatagenomics/adam/blob/master/docs/source/40_deploying_ADAM.md

cgcloud create complains if I don't specify the --vpc and --subnet arguments; I'm not sure whether that is relevant.
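For reference, the invocations below use "..." for the elided arguments; spelled out, a create call would look roughly like this (the VPC and subnet IDs are hypothetical placeholders):

(venv) $ cgcloud create --zone us-east-1a --vpc vpc-0abc --subnet subnet-0abc generic-ubuntu-trusty-box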

Could there be some DNS settings or configuration that are incorrect or missing?
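A quick way to confirm that it is local hostname resolution failing (this is the lookup that java.net.InetAddress.getLocalHost performs) would be something like the following; this is a sketch, not output captured from the box:

# does the instance know its own name, and can it resolve it?
hostname
getent hosts "$(hostname)"   # no output here means "Name or service not known"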

(venv) $ cgcloud create --zone ... generic-ubuntu-trusty-box
(venv) $ cgcloud create --zone ... -IT spark-box
(venv) $ cgcloud create-cluster --zone ... spark -c cluster1 -s 2 -t m3.large
(venv) $ cgcloud ssh -c cluster1 spark-master
INFO: Using zone 'us-east-1a' and namespace '/foo/'
INFO: Binding to instance ... 
INFO: ... waiting for instance i-021d ... 
INFO: ... running, waiting for assignment of public IP ... 
INFO: ... assigned, waiting for SSH port ... 
INFO: ... open ... 
INFO: ... instance ready.
Welcome to Ubuntu 14.04.5 LTS (GNU/Linux 3.13.0-107-generic x86_64)

 * Documentation:  https://help.ubuntu.com/

 System information disabled due to load higher than 2.0

  Get cloud support with Ubuntu Advantage Cloud Guest:
    http://www.ubuntu.com/business/services/cloud

25 packages can be updated.
8 updates are security updates.

New release '16.04.1 LTS' available.
Run 'do-release-upgrade' to upgrade to it.

Last login: Thu Jan 26 17:46:03 2017 from 34....
sparkbox@ip-10-...:~$ spark-shell
17/01/26 18:01:19 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/01/26 18:01:20 INFO spark.SecurityManager: Changing view acls to: sparkbox
17/01/26 18:01:20 INFO spark.SecurityManager: Changing modify acls to: sparkbox
17/01/26 18:01:20 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(sparkbox); users with modify permissions: Set(sparkbox)
17/01/26 18:01:21 INFO spark.HttpServer: Starting HTTP Server
17/01/26 18:01:21 INFO server.Server: jetty-8.y.z-SNAPSHOT
17/01/26 18:01:21 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:42254
17/01/26 18:01:21 INFO util.Utils: Successfully started service 'HTTP class server' on port 42254.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 1.6.2
      /_/

Using Scala version 2.10.5 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_121)
Type in expressions to have them evaluated.
Type :help for more information.
java.net.UnknownHostException: ip-10-...: ip-10-...: Name or service not known
    at java.net.InetAddress.getLocalHost(InetAddress.java:1505)
    at org.apache.spark.util.Utils$.findLocalInetAddress(Utils.scala:789)
    at org.apache.spark.util.Utils$.org$apache$spark$util$Utils$$localIpAddress$lzycompute(Utils.scala:782)
    at org.apache.spark.util.Utils$.org$apache$spark$util$Utils$$localIpAddress(Utils.scala:782)
    at org.apache.spark.util.Utils$$anonfun$localHostNameForURI$1.apply(Utils.scala:846)
    at org.apache.spark.util.Utils$$anonfun$localHostNameForURI$1.apply(Utils.scala:846)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.util.Utils$.localHostNameForURI(Utils.scala:846)
    at org.apache.spark.HttpServer.uri(HttpServer.scala:163)
    at org.apache.spark.repl.SparkIMain.classServerUri(SparkIMain.scala:141)
    at org.apache.spark.repl.SparkILoop.createSparkContext(SparkILoop.scala:1012)
    at $iwC$$iwC.<init>(<console>:15)
    at $iwC.<init>(<console>:24)
    at <init>(<console>:26)
    at .<init>(<console>:30)
    at .<clinit>(<console>)
    at .<init>(<console>:7)
    at .<clinit>(<console>)
    at $print(<console>)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
    at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1346)
    at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
    at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
    at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
    at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857)
    at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902)
    at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814)
    at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:125)
    at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:124)
    at org.apache.spark.repl.SparkIMain.beQuietDuring(SparkIMain.scala:324)
    at org.apache.spark.repl.SparkILoopInit$class.initializeSpark(SparkILoopInit.scala:124)
    at org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:64)
    at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1$$anonfun$apply$mcZ$sp$5.apply$mcV$sp(SparkILoop.scala:974)
    at org.apache.spark.repl.SparkILoopInit$class.runThunks(SparkILoopInit.scala:159)
    at org.apache.spark.repl.SparkILoop.runThunks(SparkILoop.scala:64)
    at org.apache.spark.repl.SparkILoopInit$class.postInitialization(SparkILoopInit.scala:108)
    at org.apache.spark.repl.SparkILoop.postInitialization(SparkILoop.scala:64)
    at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:991)
    at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
    at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
    at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
    at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:945)
    at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1059)
    at org.apache.spark.repl.Main$.main(Main.scala:31)
    at org.apache.spark.repl.Main.main(Main.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.net.UnknownHostException: ip-10-...: Name or service not known
    at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
    at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:928)
    at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1323)
    at java.net.InetAddress.getLocalHost(InetAddress.java:1500)
    ... 57 more

java.lang.NullPointerException
    at org.apache.spark.sql.SQLContext$.createListenerAndUI(SQLContext.scala:1367)
    at org.apache.spark.sql.hive.HiveContext.<init>(HiveContext.scala:101)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.spark.repl.SparkILoop.createSQLContext(SparkILoop.scala:1028)
    at $iwC$$iwC.<init>(<console>:15)
    at $iwC.<init>(<console>:24)
    at <init>(<console>:26)
    at .<init>(<console>:30)
    at .<clinit>(<console>)
    at .<init>(<console>:7)
    at .<clinit>(<console>)
    at $print(<console>)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
    at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1346)
    at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
    at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
    at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
    at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857)
    at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902)
    at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814)
    at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:132)
    at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:124)
    at org.apache.spark.repl.SparkIMain.beQuietDuring(SparkIMain.scala:324)
    at org.apache.spark.repl.SparkILoopInit$class.initializeSpark(SparkILoopInit.scala:124)
    at org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:64)
    at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1$$anonfun$apply$mcZ$sp$5.apply$mcV$sp(SparkILoop.scala:974)
    at org.apache.spark.repl.SparkILoopInit$class.runThunks(SparkILoopInit.scala:159)
    at org.apache.spark.repl.SparkILoop.runThunks(SparkILoop.scala:64)
    at org.apache.spark.repl.SparkILoopInit$class.postInitialization(SparkILoopInit.scala:108)
    at org.apache.spark.repl.SparkILoop.postInitialization(SparkILoop.scala:64)
    at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:991)
    at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
    at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
    at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
    at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:945)
    at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1059)
    at org.apache.spark.repl.Main$.main(Main.scala:31)
    at org.apache.spark.repl.Main.main(Main.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

<console>:16: error: not found: value sqlContext
         import sqlContext.implicits._
                ^
<console>:16: error: not found: value sqlContext
         import sqlContext.sql
                ^

The NullPointerException and the missing sqlContext afterwards look like downstream effects of the UnknownHostException. Should the hostname be in /etc/hosts? If so, a possible workaround is sketched after the listing below.

sparkbox@ip-10-...:~$ cat /etc/hostname
ip-10-...
sparkbox@ip-10-...:~$ cat /etc/hosts
127.0.0.1 localhost

# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
10.... spark-master
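One possible workaround, as a sketch I have not verified on these boxes, would be to map the hostname to the instance's private IP taken from EC2 instance metadata, so that InetAddress.getLocalHost resolves to a routable address instead of failing:

# look up this instance's private IPv4 from the EC2 metadata service
PRIVATE_IP=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)
# map the instance's own hostname to that address so getLocalHost succeeds
echo "$PRIVATE_IP $(hostname)" | sudo tee -a /etc/hosts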
heuermh commented 7 years ago

We tried again with a new VPC containing two new subnets, one private and one public (the "VPC with Public and Private Subnets" option in the AWS console). The subnet passed to --subnet was the public one. With that setup, spark-shell starts cleanly, as shown below.
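If the difference between the two VPCs is DNS-related, their DNS attributes would be the first thing to compare; a sketch with the AWS CLI (the VPC ID below is a hypothetical placeholder):

# inspect whether the VPC resolves and assigns private DNS hostnames
aws ec2 describe-vpc-attribute --vpc-id vpc-0abc --attribute enableDnsSupport
aws ec2 describe-vpc-attribute --vpc-id vpc-0abc --attribute enableDnsHostnames
# enable both so instances get resolvable ip-10-... style names
aws ec2 modify-vpc-attribute --vpc-id vpc-0abc --enable-dns-support "{\"Value\":true}"
aws ec2 modify-vpc-attribute --vpc-id vpc-0abc --enable-dns-hostnames "{\"Value\":true}"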

$ cgcloud ssh -z us-east-1e -c cluster2 spark-master
INFO: Using zone 'us-east-1e' and namespace '/foo/'
INFO: Binding to instance ... 
INFO: ... waiting for instance i-07b ... 
INFO: ... running, waiting for assignment of public IP ... 
INFO: ... assigned, waiting for SSH port ... 
INFO: ... open ... 
INFO: ... instance ready.
Welcome to Ubuntu 14.04.5 LTS (GNU/Linux 3.13.0-107-generic x86_64)

 * Documentation:  https://help.ubuntu.com/

  System information as of Thu Jan 26 18:41:37 UTC 2017

  System load:  1.22              Processes:           115
  Usage of /:   20.9% of 9.71GB   Users logged in:     0
  Memory usage: 7%                IP address for eth0: 10.0.0.108
  Swap usage:   0%

  Graph this data and manage this system at:
    https://landscape.canonical.com/

  Get cloud support with Ubuntu Advantage Cloud Guest:
    http://www.ubuntu.com/business/services/cloud

25 packages can be updated.
8 updates are security updates.

New release '16.04.1 LTS' available.
Run 'do-release-upgrade' to upgrade to it.

sparkbox@ip-10-...:~$ spark-shell
17/01/26 18:42:28 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/01/26 18:42:28 INFO spark.SecurityManager: Changing view acls to: sparkbox
17/01/26 18:42:28 INFO spark.SecurityManager: Changing modify acls to: sparkbox
17/01/26 18:42:28 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(sparkbox); users with modify permissions: Set(sparkbox)
17/01/26 18:42:29 INFO spark.HttpServer: Starting HTTP Server
17/01/26 18:42:29 INFO server.Server: jetty-8.y.z-SNAPSHOT
17/01/26 18:42:29 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:35700
17/01/26 18:42:29 INFO util.Utils: Successfully started service 'HTTP class server' on port 35700.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 1.6.2
      /_/

Using Scala version 2.10.5 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_121)
Type in expressions to have them evaluated.
Type :help for more information.
17/01/26 18:42:36 INFO spark.SparkContext: Running Spark version 1.6.2
17/01/26 18:42:36 INFO spark.SecurityManager: Changing view acls to: sparkbox
17/01/26 18:42:36 INFO spark.SecurityManager: Changing modify acls to: sparkbox
17/01/26 18:42:36 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(sparkbox); users with modify permissions: Set(sparkbox)
17/01/26 18:42:37 INFO util.Utils: Successfully started service 'sparkDriver' on port 48519.
17/01/26 18:42:38 INFO slf4j.Slf4jLogger: Slf4jLogger started
17/01/26 18:42:38 INFO Remoting: Starting remoting
17/01/26 18:42:38 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem@10.0.0.108:45923]
17/01/26 18:42:38 INFO util.Utils: Successfully started service 'sparkDriverActorSystem' on port 45923.
17/01/26 18:42:38 INFO spark.SparkEnv: Registering MapOutputTracker
17/01/26 18:42:38 INFO spark.SparkEnv: Registering BlockManagerMaster
17/01/26 18:42:38 INFO storage.DiskBlockManager: Created local directory at /var/lib/sparkbox/spark/local/blockmgr-1b13486a-a98f-410f-8b41-44a09dda4d0f
17/01/26 18:42:38 INFO storage.MemoryStore: MemoryStore started with capacity 511.1 MB
17/01/26 18:42:38 INFO spark.SparkEnv: Registering OutputCommitCoordinator
17/01/26 18:42:39 INFO server.Server: jetty-8.y.z-SNAPSHOT
17/01/26 18:42:39 INFO server.AbstractConnector: Started SelectChannelConnector@0.0.0.0:4040
17/01/26 18:42:39 INFO util.Utils: Successfully started service 'SparkUI' on port 4040.
17/01/26 18:42:39 INFO ui.SparkUI: Started SparkUI at http://10.0.0.108:4040
17/01/26 18:42:39 INFO client.AppClient$ClientEndpoint: Connecting to master spark://spark-master:7077...
17/01/26 18:42:39 INFO cluster.SparkDeploySchedulerBackend: Connected to Spark cluster with app ID app-20170126184239-0000
17/01/26 18:42:39 INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 47927.
17/01/26 18:42:39 INFO netty.NettyBlockTransferService: Server created on 47927
17/01/26 18:42:39 INFO storage.BlockManagerMaster: Trying to register BlockManager
17/01/26 18:42:39 INFO storage.BlockManagerMasterEndpoint: Registering block manager 10.0.0.108:47927 with 511.1 MB RAM, BlockManagerId(driver, 10.0.0.108, 47927)
17/01/26 18:42:39 INFO storage.BlockManagerMaster: Registered BlockManager
17/01/26 18:42:39 INFO client.AppClient$ClientEndpoint: Executor added: app-20170126184239-0000/0 on worker-20170126184145-10.0.0.244-52551 (10.0.0.244:52551) with 2 cores
17/01/26 18:42:39 INFO cluster.SparkDeploySchedulerBackend: Granted executor ID app-20170126184239-0000/0 on hostPort 10.0.0.244:52551 with 2 cores, 1024.0 MB RAM
17/01/26 18:42:39 INFO client.AppClient$ClientEndpoint: Executor added: app-20170126184239-0000/1 on worker-20170126184208-10.0.0.82-48267 (10.0.0.82:48267) with 2 cores
17/01/26 18:42:39 INFO cluster.SparkDeploySchedulerBackend: Granted executor ID app-20170126184239-0000/1 on hostPort 10.0.0.82:48267 with 2 cores, 1024.0 MB RAM
17/01/26 18:42:39 INFO client.AppClient$ClientEndpoint: Executor updated: app-20170126184239-0000/0 is now RUNNING
17/01/26 18:42:39 INFO client.AppClient$ClientEndpoint: Executor updated: app-20170126184239-0000/1 is now RUNNING
17/01/26 18:42:40 INFO scheduler.EventLoggingListener: Logging events to file:/var/lib/sparkbox/spark/history/app-20170126184239-0000
17/01/26 18:42:40 INFO cluster.SparkDeploySchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
17/01/26 18:42:40 INFO repl.SparkILoop: Created spark context..
Spark context available as sc.
17/01/26 18:42:43 INFO cluster.SparkDeploySchedulerBackend: Registered executor NettyRpcEndpointRef(null) (ip-10-0-0-244.ec2.internal:34705) with ID 0
17/01/26 18:42:43 INFO storage.BlockManagerMasterEndpoint: Registering block manager ip-10-0-0-244.ec2.internal:32949 with 511.1 MB RAM, BlockManagerId(0, ip-10-0-0-244.ec2.internal, 32949)
17/01/26 18:42:43 INFO cluster.SparkDeploySchedulerBackend: Registered executor NettyRpcEndpointRef(null) (ip-10-0-0-82.ec2.internal:34332) with ID 1
17/01/26 18:42:43 INFO storage.BlockManagerMasterEndpoint: Registering block manager ip-10-0-0-82.ec2.internal:36310 with 511.1 MB RAM, BlockManagerId(1, ip-10-0-0-82.ec2.internal, 36310)
17/01/26 18:42:43 INFO hive.HiveContext: Initializing execution hive, version 1.2.1
17/01/26 18:42:43 INFO client.ClientWrapper: Inspected Hadoop version: 2.6.0
17/01/26 18:42:43 INFO client.ClientWrapper: Loaded org.apache.hadoop.hive.shims.Hadoop23Shims for Hadoop version 2.6.0
17/01/26 18:42:46 INFO metastore.HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
17/01/26 18:42:46 INFO metastore.ObjectStore: ObjectStore, initialize called
17/01/26 18:42:46 INFO DataNucleus.Persistence: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored
17/01/26 18:42:46 INFO DataNucleus.Persistence: Property datanucleus.cache.level2 unknown - will be ignored
17/01/26 18:42:46 WARN DataNucleus.Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
17/01/26 18:42:47 WARN DataNucleus.Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
17/01/26 18:42:50 INFO metastore.ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
17/01/26 18:42:51 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
17/01/26 18:42:51 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
17/01/26 18:42:53 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
17/01/26 18:42:53 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
17/01/26 18:42:53 INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY
17/01/26 18:42:53 INFO metastore.ObjectStore: Initialized ObjectStore
17/01/26 18:42:53 WARN metastore.ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 1.2.0
17/01/26 18:42:54 WARN metastore.ObjectStore: Failed to get database default, returning NoSuchObjectException
17/01/26 18:42:54 INFO metastore.HiveMetaStore: Added admin role in metastore
17/01/26 18:42:54 INFO metastore.HiveMetaStore: Added public role in metastore
17/01/26 18:42:54 INFO metastore.HiveMetaStore: No user is added in admin role, since config is empty
17/01/26 18:42:54 INFO metastore.HiveMetaStore: 0: get_all_databases
17/01/26 18:42:54 INFO HiveMetaStore.audit: ugi=sparkbox    ip=unknown-ip-addr  cmd=get_all_databases   
17/01/26 18:42:54 INFO metastore.HiveMetaStore: 0: get_functions: db=default pat=*
17/01/26 18:42:54 INFO HiveMetaStore.audit: ugi=sparkbox    ip=unknown-ip-addr  cmd=get_functions: db=default pat=* 
17/01/26 18:42:54 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MResourceUri" is tagged as "embedded-only" so does not have its own datastore table.
17/01/26 18:42:57 INFO session.SessionState: Created HDFS directory: /tmp/hive/sparkbox
17/01/26 18:42:57 INFO session.SessionState: Created local directory: /tmp/sparkbox
17/01/26 18:42:57 INFO session.SessionState: Created local directory: /tmp/35133fb6-d589-49c2-86f8-b9182e651c00_resources
17/01/26 18:42:57 INFO session.SessionState: Created HDFS directory: /tmp/hive/sparkbox/35133fb6-d589-49c2-86f8-b9182e651c00
17/01/26 18:42:57 INFO session.SessionState: Created local directory: /tmp/sparkbox/35133fb6-d589-49c2-86f8-b9182e651c00
17/01/26 18:42:57 INFO session.SessionState: Created HDFS directory: /tmp/hive/sparkbox/35133fb6-d589-49c2-86f8-b9182e651c00/_tmp_space.db
17/01/26 18:42:57 INFO hive.HiveContext: default warehouse location is /user/hive/warehouse
17/01/26 18:42:57 INFO hive.HiveContext: Initializing HiveMetastoreConnection version 1.2.1 using Spark classes.
17/01/26 18:42:57 INFO client.ClientWrapper: Inspected Hadoop version: 2.6.0
17/01/26 18:42:57 INFO client.ClientWrapper: Loaded org.apache.hadoop.hive.shims.Hadoop23Shims for Hadoop version 2.6.0
17/01/26 18:42:58 INFO metastore.HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
17/01/26 18:42:58 INFO metastore.ObjectStore: ObjectStore, initialize called
17/01/26 18:42:58 INFO DataNucleus.Persistence: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored
17/01/26 18:42:58 INFO DataNucleus.Persistence: Property datanucleus.cache.level2 unknown - will be ignored
17/01/26 18:42:58 WARN DataNucleus.Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
17/01/26 18:42:59 WARN DataNucleus.Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
17/01/26 18:43:02 INFO metastore.ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
17/01/26 18:43:03 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
17/01/26 18:43:03 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
17/01/26 18:43:04 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
17/01/26 18:43:04 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
17/01/26 18:43:05 INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY
17/01/26 18:43:05 INFO metastore.ObjectStore: Initialized ObjectStore
17/01/26 18:43:05 WARN metastore.ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 1.2.0
17/01/26 18:43:05 WARN metastore.ObjectStore: Failed to get database default, returning NoSuchObjectException
17/01/26 18:43:05 INFO metastore.HiveMetaStore: Added admin role in metastore
17/01/26 18:43:05 INFO metastore.HiveMetaStore: Added public role in metastore
17/01/26 18:43:05 INFO metastore.HiveMetaStore: No user is added in admin role, since config is empty
17/01/26 18:43:05 INFO metastore.HiveMetaStore: 0: get_all_databases
17/01/26 18:43:05 INFO HiveMetaStore.audit: ugi=sparkbox    ip=unknown-ip-addr  cmd=get_all_databases   
17/01/26 18:43:05 INFO metastore.HiveMetaStore: 0: get_functions: db=default pat=*
17/01/26 18:43:05 INFO HiveMetaStore.audit: ugi=sparkbox    ip=unknown-ip-addr  cmd=get_functions: db=default pat=* 
17/01/26 18:43:05 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MResourceUri" is tagged as "embedded-only" so does not have its own datastore table.
17/01/26 18:43:06 INFO session.SessionState: Created local directory: /tmp/58ea7d8f-8c81-45b3-a6d6-6aacf2a2bc7d_resources
17/01/26 18:43:06 INFO session.SessionState: Created HDFS directory: /tmp/hive/sparkbox/58ea7d8f-8c81-45b3-a6d6-6aacf2a2bc7d
17/01/26 18:43:06 INFO session.SessionState: Created local directory: /tmp/sparkbox/58ea7d8f-8c81-45b3-a6d6-6aacf2a2bc7d
17/01/26 18:43:06 INFO session.SessionState: Created HDFS directory: /tmp/hive/sparkbox/58ea7d8f-8c81-45b3-a6d6-6aacf2a2bc7d/_tmp_space.db
17/01/26 18:43:06 INFO repl.SparkILoop: Created sql context (with Hive support)..
SQL context available as sqlContext.

scala>