- How to pull data from OKTA API example
Okta exposes various REST APIs (refer this) from which you can pull data and work with it according to your business requirements. Since Okta retains only 90 days of records, in many cases you will need to store the data in an external database before performing your analysis. To pull the data from Okta I decided to write a shell script, mainly because that looked like the most straightforward approach to me, but there are other methods you can consider if you have a wider project timeline. Let's see how this can be done with a shell script.

Step 1: Go through the API reference documents and filters which Okta has provided online. It's seriously very well documented, and that will help you if you want to tweak this script.

Step 2: Get an API access token from your Okta admin and validate that the token works with the Postman client. Refer this.

Step 3: Once you have the API access token and a basic understanding of the API filters, you will be able to tweak the script according to your needs.

Step 4: Below is the complete shell program with a brief explanation of what each step is doing.

# Define your environment variables - organization, domain and api_token. These will be used to construct the URL in later steps.
# If you want, you can hide your API token by reading it from a parameter file instead of hard coding it.
# Start
ORG=company_name
DOM=okta
API_TOKEN=*********************

# Initialize variables with some default values.
# Change the destination path to wherever you want to write the data.
# VAL is the pagination limit; PAT/REP_PAT are the pattern and replacement-pattern strings used to reformat the JSON output.
# DATE_RANGE is used to pull the data based on the date the user enters.
VAL=1000
DEST_FILE=/var/spark/data
i=1
PAT=
REP_PAT=
DATE_RANGE=2014-02-01

# Choose the API you need the data from (events, logs or users); modify the code if you want to export any other API's data.
echo "Enter the name of API - events, logs, users. "
read GID

# Enter the date range to pull data
echo "Enter the date in format yyyy-mm-dd"
read DATE_RANGE

date_func() {
  echo "Enter the date in format yyyy-mm-dd"
  read DATE_RANGE
}

# Check if the entered date is in the correct format
if [ ${#DATE_RANGE} -ne 10 ]; then
  echo "Invalid date!! Enter date again.."
  date_func
else
  echo "Valid date!"
fi

# Construct the URL based on all the variables defined earlier
URL=https://$ORG.$DOM.com/api/v1/$GID?limit=$VAL

# Case to choose the API name entered by the user; 4 to 10 are empty routes in case you want to add new APIs
case $GID in
  events)
    echo "events API selected"
    rm -f /var/spark/data/events.json*
    URL=https://$ORG.$DOM.com/api/v1/$GID?lastUpdated%20gt%20%22"$DATE_RANGE"T00:00:00.000Z%22\&$VAL
    PAT=}]},{\"eventId\":
    REP_PAT=}]}'\n'{\"eventId\":
    sleep 1;;
  logs)
    echo "logs API selected"
    rm -f /var/spark/data/logs.json*
    URL=https://$ORG.$DOM.com/api/v1/$GID?lastUpdated%20gt%20%22"$DATE_RANGE"T00:00:00.000Z%22\&$VAL
    PAT=}]},{\"actor\":
    REP_PAT=}]}'\n'{\"actor\":
    sleep 1;;
  users)
    echo "users API selected"
    PAT=}}},{\"id\":
    REP_PAT=}}}'\n'{\"id\":
    rm -f /var/spark/data/users.json*
    URL=https://$ORG.$DOM.com/api/v1/$GID?filter=status%20eq%20%22STAGED%22%20or%20status%20eq%20%22PROVISIONED%22%20or%20status%20eq%20%22ACTIVE%22%20or%20status%20eq%20%22RECOVERY%22%20or%20status%20eq%20%22PASSWORD_EXPIRED%22%20or%20status%20eq%20%22LOCKED_OUT%22%20or%20status%20eq%20%22DEPROVISIONED%22\&$VAL
    echo $URL
    sleep 1;;
  4) echo "four" ;;
  5) echo "five" ;;
  6) echo "six" ;;
  7) echo "seven" ;;
  8) echo "eight" ;;
  9) echo "nine" ;;
  10) echo "ten" ;;
  *) echo "INVALID INPUT!" ;;
esac

# Delete temporary files before running the script
rm -f itemp.txt
rm -f temp.txt
rm -f temp1.txt

# Create the NEXT variable to handle pagination
curl -i -X GET -H "Accept: application/json" -H "Content-Type: application/json" -H "Authorization: SSWS $API_TOKEN" "$URL" > itemp.txt
NEXT=`grep -i 'rel="next"' itemp.txt | awk -F"<" '{print$2}' | awk -F">" '{print$1}'`
tail -1 itemp.txt > temp.txt

# Validate that the URL is correctly defined
echo $URL

# Iterate through the pages, following NEXT until it is null
while [ ${#NEXT} -ne 0 ]
do
  echo "this command is executed till NEXT is null, current value of NEXT is $NEXT"
  curl -i -X GET -H "Accept: application/json" -H "Content-Type: application/json" -H "Authorization: SSWS $API_TOKEN" "$NEXT" > itemp.txt
  tail -1 itemp.txt >> temp.txt
  NEXT=`grep -i 'rel="next"' itemp.txt | awk -F"<" '{print$2}' | awk -F">" '{print$1}'`
  echo "number of loop = $i, for NEXT reference : $NEXT"
  (( i++ ))
  cat temp.txt | cut -c 2- | rev | cut -c 2- | rev > temp1.txt
  rm -f temp.txt
  # Format the output to create single-line JSON records
  echo "PATTERN = $PAT"
  echo "REP_PATTERN = $REP_PAT"
  sed -i "s/$PAT/$REP_PAT/g" temp1.txt
  mv temp1.txt /var/spark/data/$GID.json_`date +"%Y%m%d_%H%M%S"`
  sleep 1
done
# END

See also - How to setup Postman client

If you have any question please write in the comments section below. Thank you!
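If you would rather not maintain a shell script, the same pull-and-paginate logic can be sketched in Python. This is just an illustrative alternative, not part of the original post: it assumes the third-party requests library is installed, and the org name, endpoint and token below are placeholders you would replace with your own values.

# Minimal sketch: pull all records from one Okta endpoint, following pagination.
# Assumptions: "requests" is installed; ORG and API_TOKEN are hypothetical placeholders.
import requests

ORG = "company_name"                 # hypothetical Okta org
API_TOKEN = "your-okta-api-token"    # hypothetical token; read it from a file or env var in practice
url = f"https://{ORG}.okta.com/api/v1/users?limit=200"
headers = {
    "Accept": "application/json",
    "Content-Type": "application/json",
    "Authorization": f"SSWS {API_TOKEN}",
}

records = []
while url:
    resp = requests.get(url, headers=headers)
    resp.raise_for_status()
    records.extend(resp.json())
    # Okta advertises the next page in the Link header; requests parses it into resp.links
    url = resp.links.get("next", {}).get("url")

print(f"Pulled {len(records)} records")

Each page comes back already parsed as JSON objects, so the sed-based reformatting step from the shell version is not needed in this variant.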
- Apache Kafka and Zookeeper Installation & Sample Pub-Sub Model
There are many technologies available today that provide real-time data ingestion (refer my previous blog). Apache Kafka is one of my favorites because it is a distributed streaming platform. What exactly does that mean? Basically, it can act as a "publisher-subscriber" data streaming platform, it can process data streams as they occur, and it can store data streams in a fault-tolerant, durable way.

Kafka runs as a cluster on one or more servers that can span multiple data centers. The Kafka cluster stores streams of records in categories called "topics". For instance, if I have data streams coming from Twitter (refer blog), I can name that topic "Tweets". We will learn how to define these topics and how to use the pub-sub model. Each record consists of a key, a value, and a timestamp. Kafka has basically four core APIs: the Producer API, Consumer API, Streams API and Connector API.

What's the need for Zookeeper with Kafka? As each record in Kafka consists of a key, a value and a timestamp, we need something to manage the cluster's metadata and keep everything in sync. Zookeeper is essentially a centralized service for distributed systems, built on a hierarchical key-value store, which provides configuration information, naming, synchronization and group services. The Apache Kafka package comes with an inbuilt Zookeeper, but in production environments with multiple nodes people usually install Zookeeper separately. I will explain both ways to run Kafka: one with the inbuilt Zookeeper and another with a separately installed Zookeeper.

Kafka Installation

Before we start the Kafka installation, I hope you all have Java installed on your machine; if not, please refer to my previous blog. Once you have successfully installed Java, go to this link and download the latest Kafka release. I downloaded Scala 2.11 - kafka_2.11-1.1.0.tgz (asc, sha512) for this blog. Once you download it to your local machine, move it to the Linux environment where you want to run Kafka. I use MobaXterm (an open source tool) to transfer the file from my Windows machine to the Linux environment (a Red Hat Linux client without GUI in this case).

Navigate to the directory where you transferred the .tgz file and untar it. For instance, I kept it in the /var folder:

cd /var
tar -xvf kafka_2.11-1.1.0.tgz

Now remove the .tgz file - not a necessary step, but it saves some space:

rm kafka_2.11-1.1.0.tgz

Rename the folder for your convenience:

mv kafka_2.11-1.1.0 kafka

Zookeeper Installation

Follow similar steps to download the latest Zookeeper release from this link. Move the archive to your Linux machine (the /var folder in my case) and run the commands below to untar the file and set up the configuration:

tar -xvf zookeeper-3.4.11.tar
rm zookeeper-3.4.11.tar
mv zookeeper-3.4.11 zookeeper
cd zookeeper

Now let's set up the configuration file:

cd conf
cp zoo_sample.cfg zoo.cfg

You can change dataDir if you don't want to depend on your /tmp directory; if the server reboots due to some infrastructure issue you might lose /tmp data, so people usually don't rely on it. Or you can simply leave it as it is. Your configuration file will look something like the sample below. At this point you are done with the Zookeeper setup.
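For reference, here is what the stock zoo.cfg (copied from zoo_sample.cfg in the Zookeeper 3.4.x distribution) typically looks like; treat it as an illustrative sample, since your copy may differ slightly:

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial synchronization phase can take
initLimit=10
# The number of ticks that can pass between sending a request and getting an acknowledgement
syncLimit=5
# The directory where snapshots are stored (change this if you don't want to rely on /tmp)
dataDir=/tmp/zookeeper
# The port at which clients will connect
clientPort=2181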
Start Zookeeper

Now we need to start Zookeeper, since Kafka depends on it. There are basically two ways to do this: Kafka comes with an inbuilt Zookeeper program (you can find the Zookeeper scripts under the /kafka/bin directory), so you can either start the Zookeeper that comes with Kafka or run the Zookeeper you just installed separately. For the first option, navigate to your Kafka directory and run:

cd /var/kafka
bin/zookeeper-server-start.sh config/zookeeper.properties

Or you can start the Zookeeper you just installed by running the command below in the /var/zookeeper directory:

bin/zkServer.sh start

You should see output confirming that Zookeeper has started.

Start Kafka

Go to your Kafka directory and execute the command below:

cd /var/kafka
bin/kafka-server-start.sh config/server.properties

You will see lots of events being generated, and the screen will appear stuck at one point. Keep this terminal open and open a new terminal to verify that the Zookeeper and Kafka services are running fine. Type the jps command to check the status of active Java processes. In the jps output, QuorumPeerMain is the Zookeeper process and the other entry is the Kafka process (12050 and 11021 in my case; the process ids will vary for you). Once you close that terminal the Kafka service will stop. Another way is to run these services in the background with "nohup", like:

nohup bin/zookeeper-server-start.sh config/zookeeper.properties
nohup bin/kafka-server-start.sh config/server.properties

Create Kafka Topic

As discussed earlier, a Kafka cluster stores streams of records in categories called "topics". Let's create a topic called "Tweets". To do this, run the command below in your Kafka directory:

cd /var/kafka
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic Tweets

You will get a prompt saying: Created topic "Tweets". You can also see the list of topics by running:

bin/kafka-topics.sh --list --zookeeper localhost:2181

Running Kafka Producer and Consumer

Run the command below to start the Producer API. You need to tell Kafka which topic you want to publish to and on which port the broker is listening; check config/server.properties in the Kafka directory for details. For example, I am running it for Tweets on port 9092:

cd /var/kafka
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic Tweets

Now open a new terminal and let's run the Consumer API. Make sure you enter the correct topic name; the port is the one where your Zookeeper is running:

bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic Tweets --from-beginning

Now go back to the producer terminal, type anything and hit enter. The same message will show up on your consumer terminal.

Thank you!! If you have any question please write in the comments section below.

#DataIngestion #Kafka #Zookeeper #ApacheKafkaandZookeeperInstallation #kafkapubsubmodel
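As a small addendum to the Kafka post above: if you prefer to exercise the Tweets topic from code rather than from the console scripts, here is a minimal Python sketch. It is not part of the original walkthrough; it assumes the third-party kafka-python package (pip install kafka-python) and a broker listening on localhost:9092, as configured above.

# Minimal pub-sub sketch against the "Tweets" topic created above.
# Assumptions: kafka-python is installed and a broker is running on localhost:9092.
from kafka import KafkaProducer, KafkaConsumer

# Publish a couple of messages to the Tweets topic
producer = KafkaProducer(bootstrap_servers="localhost:9092")
for text in [b"hello kafka", b"second tweet"]:
    producer.send("Tweets", value=text)
producer.flush()  # make sure messages actually leave the client buffer

# Read everything from the beginning of the topic, stop after 5 seconds of silence
consumer = KafkaConsumer(
    "Tweets",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
    consumer_timeout_ms=5000,
)
for record in consumer:
    print(record.topic, record.partition, record.offset, record.value)

Running the producer part and then the consumer part should print the same messages you would otherwise see in the console consumer terminal.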
- Mutable vs Immutable Objects in Python example
Everything in Python is an object, and every object in Python is either mutable or immutable.

Immutable objects: An object with a fixed value. Immutable objects include numbers, bools, strings and tuples. Such an object cannot be altered; a new object has to be created if a different value has to be stored. They play an important role in places where a constant hash value is needed, for example as a key in a dictionary.

Consider an integer → 1 assigned to variable “a”:

>>> a=1
>>> b=a
>>> id(a)
4547990592
>>> id(b)
4547990592
>>> id(1)
4547990592

Notice that id() is the same for all of them. id() returns an integer which corresponds to the object’s memory location.

b → a → 1 → 4547990592

Now, let’s increment the value of “a” by 1 so that we have a new integer object → 2.

>>> a=a+1
>>> id(a)
4547990624
>>> id(b)
4547990592
>>> id(2)
4547990624

Notice how the id() of variable “a” changed, while “b” and “1” stayed the same.

b → 1 → 4547990592
a → 2 → 4547990624

The location to which “a” was pointing has changed (from 4547990592 → 4547990624). The object “1” was never modified; immutable objects don’t allow modification after creation. If you create n different integer values, each will have a different constant hash value.

Mutable objects can change their value but keep their id(), like list, dictionary and set.

>>> x=[1,2,3]
>>> y=x
>>> id(x)
4583258952
>>> id(y)
4583258952

y → x → [1,2,3] → 4583258952

Now, let's change the list:

>>> x.pop()
3
>>> x
[1, 2]
>>> id(x)
4583258952
>>> id(y)
4583258952

y → x → [1,2] → 4583258952

x and y still point to the same memory location even after the actual list has changed.
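To make the dictionary-key point concrete, here is a small illustration (my own addition, not from the original post): an immutable tuple can be used as a dictionary key, while a mutable list cannot, because it is unhashable.

# Immutable objects are hashable, so they can serve as dictionary keys;
# mutable objects like lists raise a TypeError when used as keys.
coords = {}
coords[(40.7, -74.0)] = "New York"      # tuple key works
print(coords[(40.7, -74.0)])            # -> New York

try:
    coords[[40.7, -74.0]] = "New York"  # list key fails
except TypeError as err:
    print("list as key failed:", err)   # unhashable type: 'list'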
- How to clear Google cloud professional data engineer certification exam?
In this blog you will learn how to get Google Cloud certified, how much it costs, the best Google certification courses available online right now, and how to train yourself with Google Cloud practice exams before the actual examination. Before we begin I would like to mention one fact: you can crack this exam even if you don't have any work experience or prior knowledge of GCP (Google Cloud Platform). I am writing this blog to show how you can clear the Google Cloud Professional Data Engineer certification without any prior knowledge of GCP. I would divide the whole preparation into 3 basic sections:

Online video lectures (absolutely free if completed within a time frame)
A glance through some Google documentation
Finally, a few practice tests

Step 1: Online Video Lectures

Coursera: Begin with the Coursera specialization, which is also suggested by Google and is genuinely informative. You can use Coursera's 7-day free trial to complete it, but since it is a very big course, you will have to devote a good amount of time every day during those 7 days. The course comes with Qwiklabs, where you can do lab assessments without creating a GCP account, and with quizzes, so you get a good understanding of GCP components along with hands-on experience.

Udemy: Next comes Udemy; it's a combined course for both data engineers and architects. This course will help you understand real-world implementation of GCP components. You can skip the machine learning part of this course if you want. These two courses are not very exam oriented, but they will give you a good understanding of every GCP component with some basic hands-on work.

Now, jumping to exam-oriented video lectures, Cloud Academy and Linux Academy come to our rescue. Both of these sites offer a 7-day free trial. Cloud Academy will give you good knowledge of most of the topics covered in the exam, and you can learn machine learning from this course. Try to understand every point covered in this course well. It also comes with quizzes for the main topics; make sure you understand the explanations given for the quiz answers. However, the Cloud Academy course doesn't cover topics such as data preparation, and this is where Linux Academy comes into the picture. The Linux Academy course covers all the topics of the exam in the most exam-oriented way, and you will get a good understanding of machine learning and the other remaining topics. It also has topic-wise tests and a full 2-hour test (50 questions) to give you a feel for the real exam. I would recommend you take this test at the last stage of preparation, attempt it at least three times, and score 100%. For revision I would suggest you go through Linux Academy's Data Dossier; it is the best part of the whole course and exactly what you will need at the last moment.

Step 2: Google Documentation

There are a few topics such as BigQuery, Pub/Sub and Data Studio for which you will have to go through the Google docs. For Dataflow you need to go through the Apache Beam documentation. For each of these components, understand the following points very well: access control, best practices and limitations. For ML, understand well the different use cases where the pre-trained ML APIs are used; this will help you decide whether to use the pre-built APIs or to build a custom model.

Step 3: Practice Tests

For practice tests you can go through the following: the Google DE practice test, the Whizlabs test, and the Linux Academy practice test. Make sure you take all the tests at least three times and understand each question and its answers well. For each question you should understand why a particular answer is correct and why the remaining ones are incorrect. In the end, Google has made this exam very logical: you need to know the ins and outs of every topic to clear it. So understand everything well and don't try to memorize or mug everything up. Best of luck!!
- Adsense Alternatives for Small Websites
Did your Google Adsense application get rejected? If the answer is yes, you are in the right place. It's not the end of the world, and there are several alternatives to Google Adsense for small websites (hosted on WIX, Wordpress, Blogger etc.). But the biggest question is: which one should you vouch for? Below is a weekly report from my Adsense account (for a one-month-old website other than dataneb.com). Still thinking?

Why did your Google Adsense application get rejected? Simply put: poor website content, low traffic, site unavailability during verification, a policy violation, and so on. It could be due to one or more of the following reasons:

Not Enough Content
Website Design & Navigation
Missing About Us or Contact Page
Low Traffic
New Website
Poor Quality Content
Language Restriction
Number of Posts
Number of Words per Post
Using Free Domain

There could be any number of factors, and even after spending several hours on research you may never get the answer. So don't get upset; as I said, it's not the end of the world, and there are several alternatives to Google Adsense. However, there is no doubt that Google Adsense provides the easiest and best monetization methods for a steady income from your blogs, so if you have a Google Adsense account, use it wisely.

My rejection reason (though I finally got approved) - I thought this might help others. I was providing the wrong URL. Yeah, I know it's funny and... a silly mistake. Make sure you provide the correct website name while submitting the Google Adsense request form. My initial two requests got denied because I entered the wrong website URL: the first time I entered http://dataneb.com and the second time https://dataneb.com. The correct name was https://www.dataneb.com. Yes, it's a silly mistake, but that's how it is; you need to enter the URL correctly, otherwise Google's crawlers will never read your website. And then came the proud moment.

Another reason which is very common but rarely mentioned is the Google Adsense bots. Yes, that's true: Adsense does not review each request and each website's content manually; its bots perform the hard task. The problem is content that is generated via JavaScript/AJAX-style retrieval. Since most crawlers, including AdSense's, do not execute JavaScript, the crawler never sees your website content and therefore treats your site as having no content. You will often face this issue with websites like WIX.

Whatever the reason, I would suggest that instead of wasting time and money on your existing traffic, you move forward with other alternatives, which have a very easy approval process and will generate a similar amount of revenue.

Google Adsense approval time? Usually within 48 hours, but sometimes longer depending on the quality of your website. If you don't get approval in the first couple of requests, trust me, you are stuck in an infinite loop of wait time. I was a little lucky in this case; it took 24 hours for me to get the final approval after a couple of issues.

Maximum number of Adsense units you can place on each page? No restriction; earlier it was just 3.

How to integrate ads with your website? Just add the HTML code provided by these systems to an HTML widget anywhere on the page where you want to show the ads.

Google Adsense Alternatives

I am not going to list the top 5, 10 or 20 and confuse you more. Instead I am recommending just 3, based on my personal experience, ease of approval and revenue, which helped me to grow my business. I have used them, and I am still using them (apart from Google Adsense) for my other websites. So let's meet our top 3 alternatives to Google Adsense. Before these, I would suggest you try the Amazon Affiliates program if you don't have much traffic; however, Amazon does not pay you for clicks or impressions, only when a sale happens.

1. Media.Net (BEST alternative after Adsense)
Approval time - a few hours
No limitation on the number of ad units per page
No hidden fees
Also known as Yahoo/Bing advertising; it provides contextual ads and holds rank 2 in contextual advertising
No minimum traffic requirement
Unlike Adsense, where you have the option to choose image ads, here you have only textual ads
Supports mobile ads
You can change the size, color and shape of the ad unit according to your convenience
Monthly payment via Paypal ($100 minimum)

2. Infolinks (It's good..)
Approval time - a few hours
Very simple to integrate with your website
Open to any publisher - small, medium or large scale
No fees, no minimum requirements for traffic, page views or visitors, and no hidden commitments
Best part: Infolinks doesn't require space on your blog; it simply converts keywords into advertisement links, so when users hover their mouse over specific keywords it automatically shows advertisements
It provides in-text advertising and pays you per click on ads, not per impression

3. Chitika (It's okay)
Approval time - a few minutes
Language restriction - English only
No minimum traffic requirement
No limitation on the number of ads per page
Payment via Paypal ($10 minimum) or by check ($50 minimum)
Ads are targeted based on the visitor's location, so if your posts are location specific this is recommended for you
Limitations on custom ad sizes
Similar to Adsense; image quality - medium

Conclusion

If you have a Google Adsense account, use it wisely. If not, move ahead with Media.net. I would suggest you just use Media.net and don't overcrowd your good-looking website with tons of different types of ads. Thank you. If you have any question for me, please comment below. Good luck!
- Calling 911 for Pepperoni Pizza Delivery, But Why?
Phone conversation with a 911 operator (reference: Reddit user Crux1836):

Officer: “911, where is your emergency?”
Caller: “123 Main St.”
Officer: “Ok, what’s going on there?”
Caller: “I’d like to order a pizza for delivery.”
Officer: “Ma’am, you’ve reached 911.”
Caller: “Yeah, I know. Can I have a large with half pepperoni, half mushroom and peppers?”
Officer: “Ummm… I’m sorry, you know you’ve called 911, right?”
Caller: “Yeah, do you know how long it will be?”
Officer: “Ok, Ma’am, is everything ok over there? Do you have an emergency?”
Caller: “Yes, I do.”
Officer: “… And you can’t talk about it because there’s someone in the room with you?”
Caller: “Yes, that’s correct. Do you know how long it will be?”
Officer: “I have an officer about a mile from your location. Are there any weapons in your house?”
Caller: “Nope.”
Officer: “Can you stay on the phone with me?”
Caller: “Nope. See you soon, thanks.”

(Officer) As we dispatch the call, I check the history at the address and see there are multiple previous domestic violence calls. The officer arrives and finds a couple; the woman was kind of banged up, and the boyfriend was drunk. The officer arrests him after she explains that the boyfriend had been beating her for a while. I thought she was pretty clever to use that trick. Definitely one of the most memorable calls.

Another case, which happened in the UK. The call went something like this:

Operator: Police Emergency.
Caller: Hello, I’d like to order a curry please.
Operator: You’re through to the police.
Caller: Could you deliver it to ‘123 Street’?
Operator: Madam, this is the police, not a delivery service.
Caller: Could you deliver it as soon as possible?
Operator: (starting to realize something is fishy) Madam, are you in a situation where you cannot talk freely?
Caller: Yes.
Operator: Are you in danger?
Caller: Yes.
Operator: Okay, I’m arranging help for you immediately.
Caller: Could you make it two naan breads? My husband is really hungry.
Operator: I’ll send two officers.

This transcript is based purely on memory from a police officer's memoir. On the police response, a very angry man was arrested for domestic violence. There was obviously the risk that the operator could have hung up on a 'time-wasting' caller, but once they realized something was wrong, they changed scripts immediately.

Can you actually call emergency services and "order a pizza" as a tactic for getting help? The answer is no; there is no such 911 pizza call "code". Police and 911 operators say there is no secret code, and that your best option, if you're afraid of someone in the room overhearing your call, is to text 911 with your location and the type of emergency. However, a meme circulating on social media reads: “If you need to call 911 but are scared to because of someone in the room, dial and ask for a pepperoni pizza… Share this to save a life.”

Here is what the LAPD tweeted: Remember, if you can't call - you can TEXT!

Tags: #Funny #Lesson
- Best Office Prank Ever, Don't Miss the End
Have you ever seen a chocolate thief in your office? This is the epic message chain that followed when someone started stealing chocolates from the office refrigerator. #Funny #Office #Prank
- Quick & Easy Punjabi Palak Paneer Recipe
My love for Palak Paneer never ends. Each time I cook Palak Paneer, I try a new variation and love to see how it turns out. Here I am going to share one of the versions which is easy and quick.

Preparation time: 20 min, Serves: 3-4

Ingredients:
Spinach: 1 bunch
Onion: 1 medium size
Tomato: 2 small
Ginger: 1/2 inch or less
Garlic: 3-4 cloves
Cardamom: 2 (whole)
Green Chili: 3
Coriander/Dhania Powder: 1 tsp
Kitchen King masala powder: 1.5 tsp
Cumin/Jeera seeds: 1/2 tsp
Paneer/Cottage cheese cubes: 200 grams
Milk: 1 cup

Preparation Steps:
Heat some oil in a pan. Once hot, add cardamom, diced onions, ginger, garlic and green chilies. Saute these for 4-5 minutes till the onions are golden brown.
Next add the tomatoes and salt. Saute till the tomatoes are soft.
Next add roughly chopped spinach (baby spinach need not be chopped). Once the spinach is soft, blend this mix to a smooth paste.
In the same pan add 1 tsp of oil. Add cumin/jeera seeds and wait till they crackle.
Add the spinach paste to the pan. Next add dhania (coriander seed) powder, kitchen king masala powder and salt to taste.
Stir the mix for a few minutes and add hot water to adjust the consistency of the gravy. Bring it to a boil.
Add paneer/cottage cheese cubes and one cup of milk. Let this simmer for 5-6 minutes. Serve hot!

Hope you all enjoy this version of Palak Paneer :)
- Spaghetti with Potatoes (Indian Style)
This spaghetti with Indian spices brings back childhood memories. It is a quick, simple, tasty recipe which everyone can enjoy as a meal anytime of the day! Preparation time: 20 min, Serves: 3-4 Ingredients : Spaghetti Onions : 2 Green Chillies : 2 Ginger : 1/2 inch Potato : 1 (medium sized) Cilantro/Coriander Tomato Ketchup : 1 cup Shredded Cheese Turmeric Powder : 1/2 tsp Red Chilli Powder : 1/2 tsp Dhania Powder ( Coriander Seed Powder) : 1 tsp Salt to taste Preparation Steps : Cook spaghetti as per the instructions mentioned on the packet. While the spaghetti is cooking heat oil in a pan. Add finely chopped ginger, green chilies. Saute for a minute and add the chopped onions. Once the onions are translucent , add the potatoes (finely chopped into small cubes). Saute for 3-4 minutes. Add Turmeric powder, red chili powder, dhania powder. Mix well. Cover the pan and let the potatoes cook. Once the potatoes are cooked, add the cooked spaghetti, tomato ketchup, cilantro and salt. Garnish with cheese and cover the lid so it melts. Serve hot! Hope you all enjoy this recipe!
- Enable Root User on Mac
By default the root user is disabled on Mac. Follow the steps below to enable, disable, or change the password for the root user on a Mac.

1. From the top left-hand side, choose Apple menu > System Preferences, then click Users & Groups (or Accounts).
2. Click the lock icon, then enter an administrator name and password.
3. After you unlock the lock, click Login Options, right next to the home icon.
4. Now click Join (or Edit), right next to Network Account Server, then click Open Directory Utility.
5. Click the lock icon in the Directory Utility window, then enter an administrator name and password.
6. From the menu bar in Directory Utility, choose Edit > Enable Root User, then enter the password that you want to use for the root user. You can enable, disable, or change the password for the root user from here.
7. Now go to Terminal, switch user to root and test:

Rajas-MacBook-Pro: Rajput$ su root
Password:

Thank you!! If you enjoyed this post, I'd be very grateful if you'd help it spread by emailing it to a friend, or sharing it on Google or Facebook. Refer to the links below. Also click the "Subscribe" button in the top right corner to stay updated with the latest posts. Your opinion matters a lot; please comment if you have any suggestions for me.

#enable #root #user #Mac









