PySpark - Working with a JDBC SQLite database



Spark supports connectivity to JDBC databases. In this session, we are going to see how to connect to a SQLite database. As of this writing, I am using Spark 2.1.1.

First, set up your spark-defaults.conf file in your conf folder so that Spark can find the SQLite JDBC driver.
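The exact entries depend on where you keep the driver, but a minimal sketch (assuming you have downloaded the sqlite-jdbc jar; the file name and the /opt/jars path below are hypothetical) looks like this:

# conf/spark-defaults.conf
# Make the SQLite JDBC driver visible to the driver and the executors
spark.driver.extraClassPath   /opt/jars/sqlite-jdbc-3.8.11.2.jar
spark.executor.extraClassPath /opt/jars/sqlite-jdbc-3.8.11.2.jar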




Next, fire up pyspark and run the following script in the REPL.
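A minimal sketch of such a script is below. It assumes a SQLite file at /tmp/test.db containing a table named people with ID and Name columns (both the path and the table name are hypothetical); it reads the table over JDBC and collects the rows into a Python list:

# Run inside the pyspark REPL, where `spark` (the SparkSession) is already defined
df = spark.read.format("jdbc").options(
    url="jdbc:sqlite:/tmp/test.db",   # hypothetical path to the SQLite file
    driver="org.sqlite.JDBC",         # driver class provided by the sqlite-jdbc jar
    dbtable="people"                  # hypothetical table with ID and Name columns
).load()

df.show()          # quick look at the table contents
a = df.collect()   # collect() returns a list of Row objects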




The returned data is a list of Row objects. You can then access the fields of each row like this:


>>> a[0].ID
2
>>> a[0].Name
'mark'


Looping through all the rows:


>>> for i in a: print(i.Name)
...
mark
eric


That's it.







