Python - Assessments
-
The following paths are required while setting up the PySpark environment.
- Spark_Home
- py4j
- None of the above
-
If the following code is run in Python, what would be the result?
Num = '5' * '5'
Ans. TypeError
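A minimal check of the answer: sequence repetition requires an integer operand, so multiplying two strings raises a TypeError, while multiplying a string by an integer repeats it.

```python
# '5' * '5' raises TypeError: sequence repetition needs an int, not a str
try:
    num = '5' * '5'
except TypeError as e:
    print("TypeError:", e)

# '5' * 5, by contrast, repeats the string
print('5' * 5)  # → 55555
```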
-
What is the relation between docker image and docker container?
- A container is a runnable copy of the image
- A container contains images
- A container is a template for creating images
- An image is a container
-
Which are the libraries in Python that support threads?
- thread
- threading
- _threading
- none of the above
-
What will be the output of the following Python code?
myDict = {1: 'Food', 2: 'Clothing', 3: 'Shelter'}
print(myDict.get(4, 5))
Ans. 5
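To confirm the answer: dict.get(key, default) returns the default when the key is absent, instead of raising a KeyError.

```python
myDict = {1: 'Food', 2: 'Clothing', 3: 'Shelter'}

print(myDict.get(4, 5))  # key 4 is missing, so the default 5 is returned
print(myDict.get(2, 5))  # key 2 exists, so its value 'Clothing' is returned
```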
-
Values of meta attributes and all other non-meta attributes are treated differently in Orange, True or False.
Ans. True
-
How can you make a Python script executable on Unix?
- The script file mode must be runnable, and the last line of the script should be #! followed by the path of the python interpreter.
- The script file mode must be executable, and the 2nd line of the script should be @ followed by the path of the python interpreter.
- The script file mode must be executable, and the 1st line of the script should be @ followed by the path of the python interpreter.
- The script file mode must be executable, and the 1st line of the script should be #! followed by the path of the python interpreter.
- The script file mode must be executable, and the last line of the script should be @ followed by the path of the python interpreter.
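Both requirements in the correct option (an executable file mode, and a first line of `#!` followed by the interpreter path) can even be applied from Python itself using only the standard library. A minimal sketch; the file name demo.py is arbitrary:

```python
import os
import stat

# 1st line of the script: "#!" followed by the path of the interpreter
script = "#!/usr/bin/env python3\nprint('hello')\n"
with open("demo.py", "w") as f:
    f.write(script)

# make the file mode executable: add the execute bits for user/group/other
mode = os.stat("demo.py").st_mode
os.chmod("demo.py", mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)

print(os.access("demo.py", os.X_OK))  # True on Unix once the bits are set
```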
-
Which port is used for the default HTTP connection? Ans. 80
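The standard socket library can confirm this well-known mapping: getservbyname() looks up the registered port for a service name.

```python
import socket

# look up the standard port registered for the HTTP service
print(socket.getservbyname('http'))  # → 80
```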
-
Python features:
- Statically typed, Interpreted, Extensible, Portability only, OOP and not procedure
- Platform dependent, statically typed, Interpreted, Extensible, Portability
- High-level programming language, Platform independent, Dynamically typed, Interpreted, Extensible
- High-level programming language, Platform independent, statically typed, Interpreted, Extensible
- Platform dependent, Dynamically typed, Interpreted, Extensible, Portability
-
In Orange, wrapping is performed to retain the information about the
- Feature names
- Feature values
- Feature names and values
- None.
-
Which function will print all the rows of the 4th column of a data frame?
Ans. print(df.iloc[:, 3])
-
What is the value of colors[2]?
colors = ['red', 'orange', 'yellow', 'green', 'blue', 'indigo', 'violet']
Ans. 'yellow'
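Python lists are zero-indexed, so index 2 selects the third element:

```python
colors = ['red', 'orange', 'yellow', 'green', 'blue', 'indigo', 'violet']

print(colors[0])   # first element → red
print(colors[2])   # third element → yellow
print(colors[-1])  # negative indices count from the end → violet
```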
-
Multithreaded Python Server Program includes which of the following modules?
- Python-Server.py
- Python-ClientA.py
- Python-ClientB.py
- None of the above
-
Pandas supports visualization of data using Matplotlib. In Matplotlib, the ____ object acts as a container that may hold one or more subplots, and each plot area inside it is called a(n) ____ object.
- Figure, axis
- Chart, edges
- Figure, plot
- Subplot, plot
-
What is the output of the below simple snippet?
import re
s = "Accenture - High Performance Delivered"
x = re.split(r"\s", s)
print(x)
Ans. ['Accenture', '-', 'High', 'Performance', 'Delivered']
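A runnable sketch of the split. Note that if the string carries a leading space (as in some printings of this snippet), re.split(r"\s", ...) keeps an empty first field, whereas str.split() with no argument discards empty fields; the stated answer assumes no leading space.

```python
import re

s = "Accenture - High Performance Delivered"
print(re.split(r"\s", s))
# → ['Accenture', '-', 'High', 'Performance', 'Delivered']

# with a leading space, re.split() keeps the empty first field ...
print(re.split(r"\s", " " + s)[0])  # → ''
# ... while str.split() with no argument drops empty fields
print((" " + s).split())
```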
-
What does CIA stand for?
- Confidentiality, Integrity, Availability
- Coding, Integration, Availability
- Confidentiality, Integration, Access
- Cryptography, Integrity, Availability
-
The Kivy GUI library supports which of the following operating systems?
- Android
- Windows
- iOS
- Linux
Ans. All of the above
-
Which parameter of the Spark context is used for the Spark installation directory?
- Master
- Conf
- Gateway
- Spark home
-
Given the statement from flask import Flask, what is the difference between Flask and flask?
- Both are the same
- flask is the framework while Flask is a Python class datatype
- None of the above
-
Which of the following is not a driver for Neo4j in Python?
- Neo4jRestClient
- py2neo
- neo4j-driver
- neo-py
-
What is the output of the following piece of code?
class Demo:
    def check(self):
        return " Demo's check "
    def display(self):
        print(self.check())

class Demo_Derived(Demo):
    def check(self):
        return " Derived's check "

Demo().display()
Demo_Derived().display()
Ans. Demo's check
     Derived's check
-
Which library will be used to do network programming?
- Socket
- Network
- Netsocket
- None of the above
-
What are the new form elements introduced in HTML5?
- Datalist
- Keygen
- Output
- Div
-
____ is used by the Spark context to launch a JVM and create a JavaSparkContext by default.
- Py4j
- Pyspark
- Numpy
-
Scrapy can send emails using which of the following methods?
- By using a standard constructor
- By using scrapy settings object
-
What are the data instances in Orange?
- Vectors
- Vector access through index
- Vector access through feature name
-
Which parameter of the Spark context do we use to initialize a new JVM?
- Master
- Conf
- Gateway
- Spark home
-
Which of the following frameworks is not used for web development?
- Django
- PyFrame
- Flash
- Pyramid
-
Pygame can handle the following objects.
- Image formats
- Joysticks
- Cursors
- All of the above
-
We can configure the number of bits we want to use as the resolution by calling the following method of an mraa.Aio instance.
- setADCResolution
- setBit
- setResolutionBits
- all of the above
-
The following models can be built for image processing using TensorFlow.
- CNN
- Inception
- QuocNet
-
Which of the following is an example of Stochastic Graph Generator?
- petersen = nx.petersen_graph()
- K_5 = nx.complete_graph(5)
- red = nx.random_lobster(100, 0.9, 0.9)
- lollipop = nx.lollipop_graph(10, 20)
-
from pyspark import SparkContext
sc = SparkContext("local", "Collect app")
words = sc.parallelize(["scala", "java", "hadoop", "spark", "spark vs hadoop", "pyspark"])
coll = words.collect()
print("Elements in RDD -> %s" % (coll))
What is the output?
- "Elements in RDD -> [ 'scala', 'hadoop', 'java', 'spark vs hadoop', 'spark', 'pyspark', ]"
- Number of elements in RDD ? 8
- Number of elements in RDD ? 6
- None of the above
-
Pandas provides static and moving window ____ in its statistical models.
- Linear regression
- Panel regression
- Linear and panel regression
- None of the above
-
What is the method to retrieve the list of all active threads?
- threads()
- enumerate()
- getThreads()
- getList()
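threading.enumerate() returns a list of all Thread objects currently alive, including the main thread. A minimal sketch; the thread name worker-1 is arbitrary:

```python
import threading
import time

def worker():
    time.sleep(0.2)  # keep the thread alive briefly

t = threading.Thread(target=worker, name="worker-1")
t.start()

# enumerate() lists every thread that is currently alive
names = [th.name for th in threading.enumerate()]
print(names)  # includes 'MainThread' and 'worker-1'

t.join()
```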
-
import pandas as pd
df = pd.DataFrame()
print(df)
The output will be?
- "Empty DataFrame Columns: [0, 0] Index: [0, 0]"
- "Empty DataFrame Columns: [0] Index: [0]"
- "Empty DataFrame Columns: [] Index: []"
- None of the Mentioned
-
SSL and HTTPS: which one is more secure? Ans. HTTPS
-
Haar Cascade in Python can be used for the following.
- Face detection
- Car detection
- Tree detection
-
Which is not a logistic regression model in Python?
- sklearn.SGDClassifier(loss='log')
- sklearn.linear_model.LogisticRegression
- sklearn.LogisticRegression
- import math
  def sigmoid(x):
      return 1/(1+math.exp(-x))
-
What is the output of the following code?
from pyspark import SparkContext
from operator import add
sc = SparkContext("local", "Reduce app")
nums = sc.parallelize([1, 2, 3, 4, 5])
adding = nums.reduce(add)
print("Adding all the elements -> %i" % (adding))
- Adding all the elements -> 12
- Adding all the elements -> 13
- Adding all the elements -> 14
- None of the above
-
Which of the following are version control tools?
- CVS
- SVN
- Mercurial
- Docker
-
Classification algorithms in Orange use which of the following objects?
- Learners
- Classifier
- Regression model
- None of the above
-
What will be the output of the following script?
corpus = ["Apple Orange Orange Apple",
          "Apple Banana Apple Banana",
          "Banana Apple Banana Banana Banana Apple",
          "Banana Orange Banana Banana Orange Banana",
          "Banana Apple Banana Banana Orange Banana"]
from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer()
vectorizer.fit(corpus)
corpus_vec = vectorizer.transform(corpus).toarray()
print(corpus_vec)
- Matrix with values
- Vector with values
- Null vector
- Scalar
-
Which of the following distance measures do we use in case of a categorical variable in K-NN?
- sklearn.metrics.hamming_loss
- sklearn.metrics.pairwise.euclidean_distances
- sklearn.metrics.pairwise.manhattan_distances
-
____ variables are used for aggregating information through associative and commutative operations.
- Broadcast
- Accumulator
-
In PySpark, the storage level decides the following.
- Whether the RDD should be stored in memory
- Whether the RDD should be stored on disk
- Whether to serialize the RDD and whether to replicate RDD partitions
-
What is the output of the following code?
from pyspark import SparkContext
sc = SparkContext("local", "count app")
words = sc.parallelize(["scala", "java", "Hadoop", "spark", "spark vs Hadoop", "pyspark"])
counts = words.count()
print("Number of elements in RDD -> %i" % (counts))
- 8
- 6
- Number of elements in RDD -> 8
- Number of elements in RDD -> 6
-
Which of the following is a classification algorithm used in Orange?
- Orange.classification.LogisticRegressionLearner
- Orange.classification.knn.KNNLearner
- Orange.classification.RandomForestLearner
-
What is the output of the below code?
>>> import os
>>> os.listdir()
- Lists the content of the Python directory
- Lists the content of the operating system directory
- Lists the content of the directory from which Python was invoked
- None of the above
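One of the logistic-regression options above defines a sigmoid by hand; that function is exactly the logistic function, and its defining properties are easy to verify:

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

print(sigmoid(0))              # → 0.5, the midpoint of the logistic curve
print(round(sigmoid(100), 6))  # saturates toward 1.0 for large positive x
print(round(sigmoid(-100), 6)) # saturates toward 0.0 for large negative x
# symmetry: sigmoid(x) + sigmoid(-x) = 1 (up to floating-point error)
print(math.isclose(sigmoid(2) + sigmoid(-2), 1.0))  # → True
```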