

  • Blow | Ed Sheeran | Bruno Mars | Chris Stapleton | Drums Sheet Music

    If you've just started to play drums and you're looking for easy drum covers for beginners, keep reading this blog; here I will list my favorite simple covers for newbies in drumming. Follow me on Youtube.

    Beats Breakdown

    In this section, I will go over all the different beats and fills used in the song. So, let's start with the first groove, which is the core of this song: it's a basic rock groove with the open hi-hat played as quarter notes and the snare on the 3rd count. The bass drum is played on 1 and the "and" of 2. The song uses different variations of this beat as it progresses. The first variation just adds a snare on the "4" along with an open hi-hat. The second variation adds a few ghost notes on the snare; they fall on the "and" of 1 and the "and" of 4. You can play ruffs or ghosted eighth notes on the snare drum depending on your preference.

    The second beat is again one of the most popular rock beats. It uses the open hi-hat on quarter notes combined with the snare on beats 1 and 3, and the bass drum on beats 2 and 4. Here the hi-hat has been replaced with a crash on 1 and 3. The next beat is played at the end of the song: it uses half notes, with the bass drum played with an open hi-hat on beat 1 and the snare played with an open hi-hat on beat 2.

    Rolls Breakdown

    Now let's go through all the different rolls used in the song. The first is the most frequently used roll in the song: it starts at the "and" of 4 of the beat with a ghost note on the snare and is played as eighth notes following a snare-tom-tom pattern. The image on the left shows the roll as it looks while being played with the beat. The second roll is played on the snare drum, with 16th notes from the "and" of 1 until 2, ending with a series of eighth notes on the snare. The next one starts on the 4 of the previous beat and is played entirely in eighth notes. It is played in triplets, i.e. bass-snare-snare, with an open hi-hat played with the bass drum. The next roll is just a combination of "Roll 1" and "Roll 2" as described above, played in the sequence "Roll 2" followed by "Roll 1". The only difference is that the last three eighth notes of "Roll 2" are played on the toms. The toughest roll in the song is played at the end of the guitar solo. In terms of sticking it is just eighth-note triplets; the speed at which it is played is what makes it tough. The song ends with the last roll, which is nothing but "Roll 1" played four times as the tempo of the song drops.

    Full sheet music

    I will be posting a video of myself playing this song on my YouTube channel. Do subscribe to the channel as well as this blog for more drum tutorials.

  • Understanding SparkContext textFile & parallelize method

    Main menu: Spark Scala Tutorial

    In this blog you will learn how Spark reads a text file or any other external dataset: referencing a dataset (SparkContext's textFile), the SparkContext parallelize method, and the Spark Dataset textFile method. As we read in the previous post, Apache Spark has mainly three types of objects, or you can say data structures (also called Spark APIs) - RDDs, DataFrames and Datasets. The RDD was the primary API when Apache Spark was founded.

    RDD - Resilient Distributed Dataset

    Consider you have a collection of 100 words and you distribute them across 10 partitions so that each partition has 10 words (more or less). Each partition has a backup so that it can be recovered in case of failure (resilient). Now, this seems very generic. In a practical environment, data will be distributed in a cluster with thousands of nodes (with backup nodes), and if you want to access the data you need to apply Spark actions, which you will learn soon. This type of immutable distributed collection of elements is called an RDD.

    DataFrames

    A DataFrame has a similar distribution of elements to an RDD, but in this case the data is organized into a structure, like a table in a relational database. Consider you have a distributed collection of [Row] type objects, like records distributed across thousands of nodes. You will get a clearer picture when we create a DataFrame, so don't worry.

    Datasets

    The Dataset API was introduced in Spark 1.6 (early 2016). Do you remember the case class which you created in "Just enough Scala for Spark"? A Dataset is like a collection of such strongly typed objects, for example the following case class Order which has 2 attributes, orderNum (Int) and orderItem (String). This is just the introduction, so even if you don't understand it yet, that's fine. You will get a clearer picture with practical examples; a minimal sketch follows below.

    Question is.. which data structure should you implement? It totally depends on the business use case. For instance, Datasets and RDDs are basically used for unstructured data like streams of media texts, when a schema and columnar format of the data is not a mandatory requirement (like accessing data by column name and other tabular attributes). Also, RDDs are often used when you want full control over the physical distribution of data across thousands of nodes in a cluster. Similarly, DataFrames are often used with Spark SQL when you have structured data and you need the schema and columnar format of the data maintained throughout the process. Datasets are also used in scenarios where you have unstructured or semi-structured data and you want to run Spark SQL.

    That being said, we have mainly the following methods to load data in Spark: SparkContext's textFile method, which results in an RDD; SparkContext's parallelize collection, which also results in an RDD; the Spark read textFile method, which results in a Dataset; SQLContext read json, which results in a DataFrame; and the Spark session read json, which also results in a DataFrame. You can also create these from parquet files with the read parquet method. Similarly, there are other methods; it's difficult to list all of them, but these examples will give you a picture of how you can create them.
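    Here is that minimal sketch of the Order idea. The original snippet isn't reproduced in this extract, so this is only an illustrative version: the sample values and the Seq are made up, and it assumes the SparkSession that spark-shell already provides as "spark".

        // A strongly typed Dataset of Order objects - a sketch, not the original blog snippet.
        case class Order(orderNum: Int, orderItem: String)

        import spark.implicits._   // brings in the encoders needed for case classes

        // Hypothetical sample data
        val orders = Seq(Order(1, "Keyboard"), Order(2, "Monitor"), Order(3, "Mouse"))
        val orderDS = spark.createDataset(orders)   // org.apache.spark.sql.Dataset[Order]

        orderDS.show()                          // tabular view of the typed objects
        orderDS.filter(_.orderNum > 1).count()  // field access is checked at compile time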
    1. SparkContext textFile [spark.rdd family]

    Text file RDDs can be created using SparkContext's textFile method. Define SparkConf and SparkContext like we did in the earlier post and use SparkContext to read the text file. I have created a sample text file with text data regarding - Where is Mount Everest? Got the answer from Wikipedia.

        scala> val dataFile = sc.textFile("/Users/Rajput/Documents/testdata/MountEverest.txt")
        dataFile: org.apache.spark.rdd.RDD[String] = /Users/Rajput/Documents/testdata/MountEverest.txt MapPartitionsRDD[1] at textFile at <console>:27

    The file has 9 lines and you can see the first line in the above screenshot. Further, you can count the number of words in the file by splitting the text (on the space character) and applying the count() action. You will learn about transformations like flatMap and actions like count soon, so don't worry.

        scala> dataFile.flatMap(line => line.split(" ")).count()
        res4: Long = 544

    Right now the motive is to show how you read a text file with the textFile member of the SparkContext family. The result is an RDD.

    Important notes: We can use wildcard characters to read multiple files together ("/file/path/*.txt"). It can read compressed files (*.gz), and files from HDFS, Amazon S3, HBase etc.

    2. SparkContext parallelize collection [spark.rdd family]

    This method is used to distribute a collection of elements of the same type (in an array, list etc.). This distributed dataset can be operated on in parallel.

        // Parallelizing a list of strings
        scala> val distData = sc.parallelize(List("apple","orange","banana","grapes"))
        distData: org.apache.spark.rdd.RDD[String] = ParallelCollectionRDD[3] at parallelize at <console>:27

        // 4 total elements
        scala> distData.count()
        res5: Long = 4

    or like these,

        scala> sc.parallelize(Array("Hello Dataneb! How are you?"))
        res3: org.apache.spark.rdd.RDD[String] = ParallelCollectionRDD[0] at parallelize at <console>:25

        scala> sc.parallelize(Array("Hello","Spark","Dataneb","Apache"))
        res4: org.apache.spark.rdd.RDD[String] = ParallelCollectionRDD[1] at parallelize at <console>:25

        scala> sc.parallelize(List(1 to 10))
        res6: org.apache.spark.rdd.RDD[scala.collection.immutable.Range.Inclusive] = ParallelCollectionRDD[2] at parallelize at <console>:25

        scala> sc.parallelize(1 to 10)
        res7: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[3] at parallelize at <console>:25

        scala> sc.parallelize(1 to 10 by 2)
        res8: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[4] at parallelize at <console>:25

    You can also see the number of partitions,

        scala> res8.partitions.size
        res13: Int = 4

    3. Read text file to create Dataset [spark.sql family]

    You can create a Dataset from a text file or any other file system like HDFS. Here, you can use the default Spark session which gets created when you start spark-shell.

        // creating a dataset
        scala> val distDataset = spark.read.textFile("/Users/Rajput/Documents/testdata/MountEverest.txt")
        distDataset: org.apache.spark.sql.Dataset[String] = [value: string]

        // 9 lines
        scala> distDataset.count()
        res0: Long = 9

        // 544 total word count
        scala> distDataset.flatMap(line => line.split(" ")).count()
        res2: Long = 544

        // 5 lines with Everest
        scala> distDataset.filter(line => line.contains("Everest")).count()
        res3: Long = 5

    Here is the shell screenshot;
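    Before moving on to DataFrames, here is a minimal sketch of the wildcard and compressed-file reads mentioned in the notes of section 1. The sub-paths used here are hypothetical, just to show the shape of the calls:

        // Hypothetical paths: read every .txt file in a folder as a single RDD
        val manyFiles = sc.textFile("/Users/Rajput/Documents/testdata/*.txt")

        // Gzipped files are decompressed transparently by the underlying Hadoop input format
        val gzFiles = sc.textFile("/Users/Rajput/Documents/testdata/archive/*.gz")

        manyFiles.count()
        gzFiles.count()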
    4. SQLContext read json to create Dataframe [spark.sql family]

    You can create DataFrames with SQLContext. SQLContext is a class in Spark which acts as an entry point for Spark SQL.

        // you need to import the sql library to create SQLContext
        scala> import org.apache.spark.sql._
        import org.apache.spark.sql._

        // telling Spark to use the same configuration as the Spark context
        scala> val sqlContext = new SQLContext(sc)
        sqlContext: org.apache.spark.sql.SQLContext = org.apache.spark.sql.SQLContext@40eb85e9

    My json file looks like this,

        [
          { "color": "red", "value": "#f00" },
          { "color": "green", "value": "#0f0" },
          { "color": "blue", "value": "#00f" },
          { "color": "cyan", "value": "#0ff" },
          { "color": "magenta", "value": "#f0f" },
          { "color": "yellow", "value": "#ff0" },
          { "color": "black", "value": "#000" }
        ]

        // creating a dataframe
        scala> val df = sqlContext.read.json("/Volumes/MYLAB/testdata/multilinecolors.json")
        df: org.apache.spark.sql.DataFrame = [color: string, value: string]

        // printing the schema of the dataframe, like a table
        scala> df.printSchema()
        root
         |-- color: string (nullable = true)
         |-- value: string (nullable = true)

        // storing this dataframe into a temp table
        scala> df.registerTempTable("tmpTable")

        // retrieving data
        scala> sqlContext.sql("select * from tmpTable").show()
        +-------+-----+
        |  color|value|
        +-------+-----+
        |    red| #f00|
        |  green| #0f0|
        |   blue| #00f|
        |   cyan| #0ff|
        |magenta| #f0f|
        | yellow| #ff0|
        |  black| #000|
        +-------+-----+

    5. Spark Session to create dataframe [spark.sql family]

    You can also create a DataFrame from the default Spark session which is created when you start spark-shell. Refer to the spark-shell blog.

        scala> spark
        res14: org.apache.spark.sql.SparkSession = org.apache.spark.sql.SparkSession@6c9fe061

        scala> spark.read.json("/Volumes/MYLAB/testdata/multilinecolors.json")
        res16: org.apache.spark.sql.DataFrame = [color: string, value: string]

        scala> res16.show()
        +-------+-----+
        |  color|value|
        +-------+-----+
        |    red| #f00|
        |  green| #0f0|
        |   blue| #00f|
        |   cyan| #0ff|
        |magenta| #f0f|
        | yellow| #ff0|
        |  black| #000|
        +-------+-----+

        scala> res16.printSchema()
        root
         |-- color: string (nullable = true)
         |-- value: string (nullable = true)

        scala> res16.select("color").show()
        +-------+
        |  color|
        +-------+
        |    red|
        |  green|
        |   blue|
        |   cyan|
        |magenta|
        | yellow|
        |  black|
        +-------+

        scala> res16.filter($"color"==="blue").show()
        +-----+-----+
        |color|value|
        +-----+-----+
        | blue| #00f|
        +-----+-----+

    You can also convert a DataFrame back to JSON like this,

        scala> res16.toJSON.show(false)
        +----------------------------------+
        |value                             |
        +----------------------------------+
        |{"color":"red","value":"#f00"}    |
        |{"color":"green","value":"#0f0"}  |
        |{"color":"blue","value":"#00f"}   |
        |{"color":"cyan","value":"#0ff"}   |
        |{"color":"magenta","value":"#f0f"}|
        |{"color":"yellow","value":"#ff0"} |
        |{"color":"black","value":"#000"}  |
        +----------------------------------+

    You can also create DataFrames from parquet files, text files etc. You will learn this soon; a minimal sketch of the parquet read follows below.
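    Since the read parquet method was mentioned above but not demonstrated, here is a minimal sketch. The parquet paths are hypothetical and only illustrate the shape of the read and write calls:

        // Hypothetical path: reading parquet returns a DataFrame with its schema preserved
        val parquetDF = spark.read.parquet("/Volumes/MYLAB/testdata/colors.parquet")
        parquetDF.printSchema()
        parquetDF.show()

        // A DataFrame can also be written back out as parquet
        parquetDF.write.parquet("/Volumes/MYLAB/testdata/colors_copy.parquet")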
    That's all guys! If you have any question or suggestion, please write it in the comments section below. Thank you folks.

    Next: Spark Transformations

  • What is SparkContext (Scala)?

    Main menu: Spark Scala Tutorial

    In this blog you will learn: how to start spark-shell, understanding spark-shell, creating a Spark context and Spark configuration, importing SparkContext and SparkConf, and writing a simple SparkContext Scala program.

    Starting Spark-shell

    If you haven't installed Apache Spark on your machine, refer to this (Windows | Mac users) for installation steps. Apache Spark installation is very easy and shouldn't take long. Open your terminal and type the command spark-shell to start the shell. You get the same output as we saw during Spark installation.

        $ spark-shell
        19/07/27 11:30:00 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
        Setting default log level to "WARN".
        To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
        Spark context Web UI available at http://19x.xxx.x.x5:4040
        Spark context available as 'sc' (master = local[*], app id = local-1564252213176).
        Spark session available as 'spark'.
        Welcome to
              ____              __
             / __/__  ___ _____/ /__
            _\ \/ _ \/ _ `/ __/  '_/
           /___/ .__/\_,_/_/ /_/\_\   version 2.3.1
              /_/

        Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_171)
        Type in expressions to have them evaluated.
        Type :help for more information.

    What is Spark-shell?

    Spark shell is an interactive shell through which you can access Spark APIs. Apache Spark has basically three sets of APIs (Application Program Interfaces) - RDDs, Datasets and DataFrames - that allow developers to access the data and run various functions across four different languages: Java, Scala, Python and R. Don't worry, I will explain RDDs, Datasets and DataFrames shortly. Easy right? But I need to explain a few facts before we proceed further. Refer to the screenshot shown below. We usually ignore the fact that there is a lot of information in this output.

    1. The first line of the Spark output is showing us a warning that it's unable to load the native-hadoop library and it will use builtin-java classes where applicable. It's because I haven't installed the hadoop libraries (which is fine..), and wherever applicable Spark will use the built-in Java classes.

        Output: 19/07/27 11:30:00 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
        Setting default log level to "WARN".

    My point here is not the warning, but the WARN log level. Spark has various logging levels which you can set while writing the program, for example WARN, ALL, DEBUG, ERROR, INFO, FATAL, TRACE, TRACE_INT, OFF. By default the Spark logging level is set to "WARN".

    2. The next line tells us how to adjust the logging level from the default WARN to a new level. We will learn later how to run this piece of code, sc.setLogLevel(newLevel). It's syntactically a little different in Scala, R, Java and Python.

        Output: To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).

    3. The next line gives us the link for the Spark UI, where you can see the DAG visualization of your jobs. You can copy-paste that link into your local browser to open the Spark user interface. By default, it runs at port number 4040. It would look like this.

    4. The next line tells us that the SparkContext is created as "sc" and by default it's going to use all the local resources in order to execute the program, master = local[*], with application id local-1564252213176.

        Output: Spark context available as 'sc' (master = local[*], app id = local-1564252213176).
    5. The Spark session is created as 'spark'. We will see what a Spark session is soon.

    6. This line tells us the Spark version; currently mine is 2.3.1.

    7. We all know Java is needed to run Apache Spark, and that's what we did during installation: we installed Java first and then we installed Apache Spark. Here, the line tells us the underlying Scala version 2.11.8 and Java version 1.8.0_171.

        Output: Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_171)

    8. You can run the :help command for more information. Like this. Well, it's again a new story and I will write in detail how to use these commands soon. However, I have highlighted a few common commands - like how you can see the history of your commands and edit it, and how you can quit spark-shell.

    Initializing Spark

    In the last section we encountered a few terms like the Spark context (by default started as "sc") and the Spark session (by default started as "spark"). If you run these commands one by one you will find the default setup and the alphanumeric pointer locations (like @778c2e7c) of these Spark objects. They will be different on various machines; yours will be different from mine. For instance,

        scala> sc
        res0: org.apache.spark.SparkContext = org.apache.spark.SparkContext@778c2e7c

        scala> spark
        res1: org.apache.spark.sql.SparkSession = org.apache.spark.sql.SparkSession@16ccd2bc

    What is SparkContext?

    The first thing you do in a Spark program is set up the Spark context object. Why the first thing? Because you need to tell the Spark engine how to run and what to run. It's like ordering or buying a pizza: you first need to say whether you want a veg pizza or a non-veg pizza, and the toppings ;).

    Spark context performs two major tasks (via the Spark configuration, SparkConf). It's not that these are the only two tasks, but they are the basic ones. First, setMaster tells the Spark engine how to run, i.e. whether it should run in cluster mode (master) or local mode (local). We will see how to set up a master, i.e. a Yarn, Mesos or Kubernetes cluster or standalone local mode, shortly. Second, setAppName tells it what to run, i.e. the application name. So, basically the Spark context tells the Spark engine which application will run in which mode.

    How to Setup SparkContext?

    In order to define a SparkContext, you need to configure it, which is done via SparkConf. You need to tell the Spark engine the application name and the run mode.

    1. For this, we need to import two Spark classes; without these Spark will never understand our inputs.

        scala> import org.apache.spark.SparkContext
        import org.apache.spark.SparkContext

        scala> import org.apache.spark.SparkConf
        import org.apache.spark.SparkConf

    2. Next, define the configuration variable conf: first pass the "Sample Application" name via the setAppName method, and second define the mode with the setMaster method. I have set up conf for local mode with all [*] resources.

        scala> val conf = new SparkConf().setAppName("Sample Application").setMaster("local[*]")
        conf: org.apache.spark.SparkConf = org.apache.spark.SparkConf@c0013b8

    You can see the location (@c0013b8) of my configuration object. The Spark engine can run either in standalone mode or cluster mode at one time, so at any given point of time you will have just one SparkContext. Confused? Wait, I will explain soon. Try to create a new SparkContext with the above configuration.

        scala> new SparkContext(conf)

    You will get an error telling you that one Spark context is already running. If you want to update the SparkContext you need to stop() the default Spark context, i.e. "sc", and re-define the Spark context with the new configuration.
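    As a minimal sketch of that stop-and-recreate step (reusing the conf value defined above; the newSc name and the log level are just examples):

        // Stop the default Spark context created by spark-shell
        sc.stop()

        // Re-create the Spark context with the new configuration
        val newSc = new SparkContext(conf)

        // Optionally adjust the logging level mentioned earlier
        newSc.setLogLevel("ERROR")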
    I hope you all understood what I meant by one active Spark context. Here is the complete reference from the Apache documentation on what you can pass while setting up setMaster.

    Well, instead of doing all of the above configuration, you can also change the default SparkContext "sc" which we saw earlier. For this you need to pass the inputs with the spark-shell command before you start the spark shell.

        $ spark-shell --master local[2]
        19/07/27 14:33:23 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
        Setting default log level to "WARN".
        To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
        Spark context Web UI available at http://19x.xxx.0.15:4040
        Spark context available as 'sc' (master = local[2], app id = local-1564263216688).
        Spark session available as 'spark'.
        Welcome to
              ____              __
             / __/__  ___ _____/ /__
            _\ \/ _ \/ _ `/ __/  '_/
           /___/ .__/\_,_/_/ /_/\_\   version 2.3.1
              /_/

        Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_171)
        Type in expressions to have them evaluated.
        Type :help for more information.

    The default setup was to utilize all local[*] cores (refer to the output of the first spark-shell command at the start of this post); now you can see it has changed to use local[2] cores.

    Creating SparkContext in Scala IDE example

    You can write a similar program in the Eclipse Scala IDE and run the sample application as follows. See How to run Scala IDE. Copy-paste these lines from here.

        package com.dataneb.spark

        import org.apache.spark.SparkContext
        import org.apache.spark.SparkConf

        object scExample {

          val conf = new SparkConf().setAppName("Sample Application").setMaster("local[4]")
          val sc = new SparkContext(conf)

          def main(args: Array[String]): Unit = {
            print("stopping sparkContext \n")
            sc.stop()
          }
        }

    That's all guys! Please comment in the comments section below if you have any question regarding this post. Thank you!

    Next: SparkContext Parallelize

  • Evanescence | Bring me to Life | Drum Sheet Music

    Easy Drum Covers for Beginners | Evanescence Bring Me to Life drum sheet music

    If you've just started to play drums and you're looking for easy drum covers for beginners, keep reading this blog; here I will list my favorite simple covers for newbies in drumming.

    Evanescence Bring Me to Life Drum Cover

    Follow me on Youtube. The song starts with a simple 4/4 groove spanning 2 bars, with eighth notes on the hi-hat, the snare hit on 2 and 4, and ghost notes on the snare on the “a” of 2 and the “e” of 3. It ends with an open hi-hat on the “and” of 4 in the second bar. Here is what the groove looks like. This brings us to the first verse of the song. The groove here looks like this: you can play the eighth notes mentioned either on a ride cymbal or a crash cymbal; in the actual song it is played on a ride cymbal. This same grouping is played twice in the verse, which brings us to the next part: this is again a two-bar groove with eighth notes played on an open hi-hat and the snare drum on 2 and 4 of each bar. The difficult part here is to get the bass drum right. I would recommend practicing it at a very slow tempo first and then gradually trying to reach the tempo of the song. There are two bars of this, after which we get back to the groove played in the first verse of the song. The next part uses a combination of two grooves. The first one is a one-bar groove which looks like this: The second one is a two-bar groove; the first bar looks like this: And the second bar looks like this: This same groove gets repeated in the next measure with a small difference in the second bar of the second group. The next measure uses yet another groove, which looks like the following: The next part of the song uses variations of the groove played after the first verse, and it looks like this: The next part uses the same groove as the first verse; here is how it looks: The song ends with the following groove: I have posted a video of myself playing this song on my YouTube channel. Do subscribe to the channel as well as this blog for more drum tutorials.

  • Apache Spark Tutorial Scala: A Beginners Guide to Apache Spark Programming

    Learn Apache Spark: Tutorial for Beginners - This Apache Spark tutorial documentation will introduce you to Apache Spark programming in Scala. You will learn about Scala programming, DataFrames, RDDs, Spark SQL, and Spark Streaming with examples, and finally prepare yourself for Spark interview questions and answers.

    What is Apache Spark?

    Apache Spark is an analytics engine for big data processing. It can run up to 100 times faster than Hadoop MapReduce for in-memory workloads, and it gives you full freedom to process large-scale data in real time, run analytics, and apply machine learning algorithms.

    Navigation menu

    1. Apache Spark and Scala Installation
       1.1 Spark installation on Windows
       1.2 Spark installation on Mac
    2. Getting Familiar with Scala IDE
       2.1 Hello World with Scala IDE
    3. Spark data structure basics
       3.1 Spark RDD Transformations and Actions example
    4. Spark Shell
       4.1 Starting Spark shell with SparkContext example
    5. Reading data files in Spark
       5.1 SparkContext Parallelize and read textFile method
       5.2 Loading JSON file using Spark Scala
       5.3 Loading TEXT file using Spark Scala
       5.4 How to convert RDD to dataframe?
    6. Writing data files in Spark
       6.1 How to write single CSV file in Spark
    7. Spark streaming
       7.1 Word count example Scala
       7.2 Analyzing Twitter texts
    8. Sample Big Data Architecture with Apache Spark
    9. What's Artificial Intelligence, Machine Learning, Deep Learning, Predictive Analytics, Data Science?
    10. Spark Interview Questions and Answers

    Next: Apache Spark Installation (Windows | Mac)

  • SQL Server 2014 Standard Download & Installation

    In this post, you will learn how to download and install SQL Server 2014 from an ISO image. I will also download the AdventureWorks sample database and load it into our SQL Server 2014.

    Downloading the ISO image

    1. You can download SQL Server from this link. But in case the link changes, you can Google "download SQL server enterprise evaluation" and it will show you the results below.
    2. Open the link highlighted above and you will find all the latest SQL Server versions available. I am installing SQL Server 2014 for this post. As it's an older version, it is very stable.
    3. Click on the little plus sign and select the ISO image to download.
    4. Fill out these details and hit continue.
    5. Select your platform based on your system properties, whether it's 32-bit or 64-bit (go to your control panel and look for system properties). The download will take approximately 10 minutes depending on your network. It's a ~3 GB ISO file. Make sure you have enough space on your computer to perform this installation.

    Run the ISO image

    1. Once the download is complete, double-click on the ISO file and it will automatically mount on your drive as shown below. If it doesn't mount automatically you can use WinRAR to unzip the ISO file.
    2. Open the drive to run the setup.

    SQL Server 2014 Installation

    1. Setup will take some time (approx. a minute) to open. You will get this screen.
    2. Go to the Installation tab as shown above and click on New SQL Server stand-alone installation.
    3. Click Next; you can select check for updates or leave it as default.
    4. This step will take some time (approx. 5 mins) while it installs the setup files.
    5. Finally, you will get this screen. Click Next.
    6. I have SQL Server Express edition running on my machine, so we are seeing this option. We will just continue with the new installation default option.
    7. Keep the edition as Evaluation and hit Next.
    8. Check the box to accept the license terms and hit Next.
    9. Select SQL Server Feature Installation and hit Next.
    10. Select all the features and hit Next as shown below.
    11. Enter the name of the instance, I have named it "SQL2014", and hit Next.
    12. Leave it as default and hit Next as shown below.
    13. Just leave the default as Windows authentication for simplicity and add the current user. Windows will automatically add the current user when you click the Add current user button. Hit Next. Leave data directories and file streams as default for simplicity. Usually, in a production environment, we choose different drives for logs and data (in case the server crashes).
    14. Leave the Analysis Services mode as default and add the current user. Hit Next.
    15. Leave the default options as shown below.
    16. Click Add current user and hit Next.
    17. Give a Controller Name, I have named it "Dataneb", and hit Next.
    18. Hit the Install button.
    19. Installation will take some time (it took 1 hour for me). So sit back and relax.

    Completion

    After 1 hour the installation was completed for me. This might take the same or less time for you depending upon your system configuration. Now, you can open the SQL Server Configuration Manager and see whether SQL Server 2014 is running or not. I have SQL Server Express edition also running on this machine; ignore that.

    AdventureWorks Database

    I am loading the AdventureWorks database here for example purposes.

    1. In order to load AdventureWorks, open SSMS (SQL Server Management Studio) and connect to the SQL2014 instance which we just installed.
    2. Now download a sample AdventureWorks 2014 database from the Microsoft site. You can go to this link and choose any one; I am downloading AdventureWorks 2014.
    3. Once the download completes, move the AdventureWorks2014.bak file to the MSSQL Server 2014 backup location. It will be C:\Program Files\Microsoft SQL Server\MSSQL12.SQL2014\MSSQL\Backup
    4. Now open SSMS > right-click on Databases > Restore Database > select Device > click on the three dots > Add.
    5. Add the AdventureWorks2014 backup file and hit OK.
    6. Once the backup is loaded, you will be able to see AdventureWorks2014 under your database panel; if not, right-click and refresh.

    That's all! If you have any questions please mention them in the comments section below.

    Next: How to create an SSIS package in Visual Studio 2017

  • Create SSIS package in Visual Studio 2017

    In this tutorial, you will learn how to create an SSIS (SQL Server Integration Services) package in Visual Studio 2017 step by step. For this, you need to install SQL Server Data Tools (SSDT) on your machine. SSDT (the Business Intelligence template) is used to create SSIS/SSRS/SSAS solutions.

    For Visual Studio 2019+

    Visual Studio 2019 and later already include the SSDT BI tooling, so you don't need to install SSDT separately. You just need to check the box "Data Storage and Processing" in the workload section while installing Visual Studio.

    Visual Studio 2017 Installation

    Download the older version of Visual Studio from this link. Scroll down on that link to see the older versions. You need to create a Microsoft account (if you don't have one) and have a free Visual Studio subscription. Below is the product description which you need to download. Download the latest "Community" version (15.9, not 15.0) to install SSDT BI as highlighted below. VS installation might take 20 to 45 mins depending on your system configuration. If you already have VS 2017 (version 15.0) then you have to upgrade VS 2017 to the latest version. You can go to the installed Visual Studio version and check for available updates.

    SSDT BI Installation

    You can download SSDT (version 15.9.1) from this link. This link might change as new versions come out in the future; in that case, you can simply Google "SSDT release 15.9" and visit the Microsoft VS 2017 SSDT installation page. Check all the services (SSAS/SSIS/SSRS) as shown below and select Visual Studio Community 2017 from the drop-down list. Click Install. The download and installation process will take around 30 minutes depending on your system configuration. Restart your computer once the installation is completed.

    Installation Check

    Once installation is done, open Visual Studio 2017 and go to the menu option File > New Project. Look up Business Intelligence; if you can find Integration Services, Analysis Services, and Reporting Services on the left side of the panel, then the installation is fine.

    Create SSIS package

    To create a basic SSIS package, you need to: create a project for the package, add a control flow and a data flow to the package, and add components to the data flow.

    What is SSIS?

    SSIS is an ETL tool for data warehousing that comes with Microsoft SQL Server. There is no extra cost for SSIS services. It lets you set up automated data load or extract processes to and/or from your SQL Server. SSIS stands for SQL Server Integration Services and ETL stands for Extract-Transform-Load. It is comparable to other ETL tools like Informatica, IBM DataStage etc.

    What does it do?

    SSIS provides you a platform, referred to as SSDT, to develop ETL solutions which can be a combination of one or more packages. Solutions are saved with the .sln extension and packages are XML files saved with the .dtsx extension. Packages are deployed to a SQL Server database called SSISDB and managed in the Integration Services Catalogs node in SSMS (SQL Server Management Studio).

    Creating the Package

    1. Go to File > New > Project, and name your project. Click OK. It will open the SSIS designer. On the left-hand panel, you will see the SSIS toolbox with all the tasks, and at the center, you will see various tabs to switch between control flow, data flow, parameters, event handlers, and package explorer. On the right panel, you will see Solution Explorer where you can find the connection manager.
    Now, to create an SSIS package you need at least one control flow and one data flow task. A data flow task is simply a task that is used to Extract, Transform, and Load the data, and the control flow is the logical unit that controls the execution of tasks, i.e. the order in which tasks will execute.

    2. Drag and drop the Data Flow Task from the SSIS toolbox to the central panel (control flow tab) like the following:
    3. You can double-click on the Data Flow Task to rename it. I am keeping it as the default "Data Flow Task". Now right-click on Data Flow Task > Edit, or you can simply select the "Data Flow Task" and click on the "Data Flow" tab; it will open a screen where you can edit your "Data Flow Task".
    4. Now drag and drop the OLE DB Source, OLE DB Destination, and Data Conversion tasks from the SSIS toolbox to the designer space as shown below.
    5. Select OLE DB Source and drag and drop the blue/green arrow to connect it to the Data Conversion task. Similarly, drag and drop the blue/green arrow (not the red one) from the Data Conversion task to OLE DB Destination.
    6. Now, you need to create an OLE DB connection for the source and target. For this go to the Solution Explorer panel on the right-hand side > Connection Manager > New Connection Manager. Select OLE DB and click Add. If you have already created an OLE DB data connection earlier on your machine it will show up here; otherwise, you can click on New and create a new one. Just enter your database name and test the connection. I assume you have the AdventureWorks database running on your machine; if not, please refer to this post. I have already installed SQL Server 2014 and SQL Express, so you can see 2 instances of the SQL Server service running on my machine. For this example, I have chosen the AdventureWorks 2014 database.
    7. Now go back to the Data Flow screen and right-click on the OLE DB Source task > Edit. Choose a sample table from the drop-down list, [Production].[Product]. Now go to the Columns tab, remove the selected columns, and select these five columns - Name, ListPrice, Size, Weight, and SellStartDate - and click OK. It's just for example purposes.
    8. Now go to the Data Flow and right-click on the Data Conversion task > Edit. Select the SellStartDate column and change its data type from [DT_DBTIMESTAMP] to [DT_DBDATE], keep the alias name the same, and click OK. Just a minor datatype conversion to showcase this example.
    9. Now, right-click on the OLE DB Destination editor > Edit > New. SSIS by default creates the "create table" statement for you with the input columns.

        CREATE TABLE [OLE DB Destination] (
            [Name] nvarchar(50),
            [ListPrice] money,
            [Size] nvarchar(5),
            [Weight] numeric(8,2),
            [OLE DB Source.SellStartDate] datetime,
            [Data Conversion.SellStartDate] date
        )

    Edit the table name, remove [OLE DB Source.SellStartDate], and hit OK,

        CREATE TABLE [OLE DB Destination_Products] (
            [Name] nvarchar(50),
            [ListPrice] money,
            [Size] nvarchar(5),
            [Weight] numeric(8,2),
            [Data Conversion.SellStartDate] date
        )

    Mappings should look like this; just click OK.
    10. Now right-click on the blue/green arrow between the Data Conversion task and the OLE DB Destination task and enable the data viewer. This is not a mandatory step, but it lets you see a data preview after the data conversion.
    11. Now hit the START button at the top of your screen. This will start the package. You can see SellStartDate has only the date after the conversion (no time field), all the tasks have green ticks, which means they ran successfully, and the number of rows processed is 1,008.
    You can stop the flow or restart it again from the buttons highlighted at the top of the screen. That's it. This package creation example was showcased by Microsoft itself; I haven't modified anything, to keep the example simple and informative. I hope you enjoyed the post. If you have any questions please mention them in the comments section below. Thank you.

    Next: SQL Server 2014 Download and Installation

  • How to clear Google cloud professional data engineer certification exam?

    In this blog you will learn: how to get Google Cloud certification, how much it costs to get Google certified, the best Google certification courses available online right now, and how to train yourself with Google Cloud certification practice exams before the actual examination.

    Before we begin I would like to mention one fact: you can crack this exam even if you don't have "any work experience or prior knowledge" of GCP (Google Cloud Platform). I am writing this blog to showcase how you can clear the Google Cloud Professional Data Engineer certification without any prior knowledge of GCP. I would start by dividing this whole preparation into 3 basic sections: online video lectures (absolutely free if completed within a time frame), a glance through some Google documentation, and finally a few practice tests.

    Step 1. Online Video Lectures

    Coursera: First begin with the Coursera course, which is also suggested by Google, and it's really knowledgeable. You can use the 7-day free trial of Coursera to complete this specialization. But since this is a very big course, you will have to devote a good amount of time every day for these 7 days. This course comes with Qwiklabs, where you can do lab assessments without creating any GCP account. The course also comes with quizzes, so you get a good understanding of GCP components along with hands-on experience.

    Udemy: Next comes Udemy; it's a combined course for both data engineers and architects. This course will help you understand real-world implementations of GCP components. You can skip the machine learning part of this course if you want. These two courses are not very exam oriented, but they will give you a good understanding of every GCP component with some basic hands-on work.

    Now jumping to exam-oriented video lectures, Cloud Academy and Linux Academy come to our rescue. Both of these sites come with a 7-day free trial option. Cloud Academy will give you good knowledge of most of the topics covered in the exam. You can learn machine learning from this course. Try to understand well each and every point covered in this course. This course also comes with quizzes for the main topics; understand well the explanations given for the quizzes. However, the Cloud Academy course doesn't cover topics such as data preparation, and this is where Linux Academy comes into the picture. The Linux Academy course covers all the topics of the exam in the most exam-oriented way. You will get a good understanding of machine learning and the other remaining topics. This course also has topic-wise tests and a full 2-hour test (50 questions) to give you a feel of the real test. However, I would recommend you take this test at the last stage of preparation, attempt it at least three times, and score 100%. For revision I would suggest you go through Linux Academy's Data Dossier. This is the best part of the complete course, which you will need at the last moment.

    Step 2: Google Documentation

    There are a few topics such as BigQuery, Pub/Sub, and Data Studio for which you will have to go through the Google docs. For Dataflow you need to go through the Apache Beam documentation. Understand the following points for each of the components very well: access control, best practices, and limitations. For ML, understand well the different use cases where pre-trained ML APIs are used. This will help you understand whether to use the pre-built APIs or to build a custom model.
    Step 3: Practice Tests

    For practice tests you can go through the following: the Google DE practice test, the Whizlabs test, and the Linux Academy practice test. Make sure you take all the tests at least three times and understand each question and its answers well. For each question you should understand why a particular answer is correct and why the remaining ones are incorrect.

    At the end I would say that Google has made this exam very logical, where you need to know the ins and outs of every topic very well to clear the exam. So understand everything well and don't try to memorize or mug up everything. Best of luck!!

  • Calling 911 for Pepperoni Pizza Delivery, But Why?

    Phone conversation with a 911 operator (reference Reddit user Crux1836):

    Officer: "911, where is your emergency?"
    Caller: "123 Main St."
    Officer: "Ok, what's going on there?"
    Caller: "I'd like to order a pizza for delivery."
    Officer: "Ma'am, you've reached 911"
    Caller: "Yeah, I know. Can I have a large with half pepperoni, half mushroom and peppers?"
    Officer: "Ummm… I'm sorry, you know you've called 911 right?"
    Caller: "Yeah, do you know how long it will be?"
    Officer: "Ok, Ma'am, is everything ok over there? Do you have an emergency?"
    Caller: "Yes, I do."
    Officer: "… And you can't talk about it because there's someone in the room with you?"
    Caller: "Yes, that's correct. Do you know how long it will be?"
    Officer: "I have an officer about a mile from your location. Are there any weapons in your house?"
    Caller: "Nope."
    Officer: "Can you stay on the phone with me?"
    Caller: "Nope. See you soon, thanks"

    (Officer) As we dispatch the call, I check the history at the address and see there are multiple previous domestic violence calls. The officer arrives and finds a couple; the woman was kind of banged up, and the boyfriend was drunk. The officer arrests him after she explains that the boyfriend had been beating her for a while. I thought she was pretty clever to use that trick. Definitely one of the most memorable calls.

    Another case, which happened in the UK; the call went something like this:

    Operator: Police Emergency
    Caller: Hello, I'd like to order a curry please.
    Operator: You're through to the police
    Caller: Could you deliver it to '123 Street'
    Operator: Madam, this is the police, not a delivery service
    Caller: Could you deliver it as soon as possible?
    Operator: (starting to realize something is fishy) "Madam, are you in a situation where you cannot talk freely?"
    Caller: Yes.
    Operator: Are you in danger?
    Caller: Yes.
    Operator: Okay, I'm arranging help for you immediately.
    Caller: Could you make it two naan breads? My husband is really hungry.
    Operator: I'll send two officers.

    This transcript is purely based on memory from a police officer's memoir. On the police response, a very angry man was arrested for domestic violence. There was obviously the risk that the operator could have hung up on a 'time-wasting' caller, but once they realized something was wrong, they changed scripts immediately.

    Can you actually call emergency services and "order a pizza" as a tactic for help? The answer is "No"; there is no such 911 pizza call "code". Police and 911 operators say there's no such secret code, and that your best option, if you're afraid of someone in the room overhearing your call, is to text 911 with your location and the type of emergency. However, a meme circulating on social media reads: "If you need to call 911 but are scared to because of someone in the room, dial and ask for a pepperoni pizza… Share this to save a life"

    Here is what the LAPD tweets: Remember, if you can't call - you can TEXT!

    Tags: #Funny #Lesson

  • Best Office Prank Ever, Don't Miss the End

    Have you ever seen a chocolate thief in your office? This is the epic message chain that started when someone began stealing chocolates from the office refrigerator. #Funny #Office #Prank
