
Aspire | Pythonista to Python Master | Python Master Track 3: Dynamic Data Handling with Python

Final Exam: Dynamic Data Handling with Python will test your knowledge and application of the topics presented throughout the Dynamic Data Handling with Python track of the Aspire Pythonista to Python Master Journey.



Objectives

Final Exam: Dynamic Data Handling with Python

  • add a foreign key constraint between tables
  • analyze the common and distinct values in two tables using various petl functions
  • automate operations using triggers
  • calculate aggregate statistics for a field in a table using the aggregate function
  • configure an httpx.Cookies instance to send a collection of properties to a remote server
  • create and invoke stored procedures
  • create and use a SQL primary key constraint with autoincrement
  • create a table and insert rows into it (see the SQL sketch after this list)
  • create a table in SQLite and read it into petl using SQLAlchemy and SQLite3
  • create SQL indexes on tables
  • create tables using object relational mapping
  • define and submit a POST request containing JSON and binary data
  • download a set of files sequentially using HTTPX
  • drop a table, recreate it, and insert rows into it
  • execute alter operations to add constraints and indices to tables
  • explore different ways to use logical operators for querying data
  • get data from MS Excel and perform basic operations on the data
  • identify the different options available to stream large volumes of data in an HTTP response
  • identify when a redirect has taken place upon submitting an HTTP request
  • implement an httpx.Cookies instance to send a collection of properties to a remote server
  • implement check constraints
  • implement insert and delete operations
  • implement slicing, dicing, and merging operations on petl data tables
  • implement specialized types of joins such as anti joins and cross joins
  • implement split operations on data stored within petl tables
  • insert and edit columns and rows in petl data tables
  • insert data into views
  • insert rows into a table
  • install petl and create a basic petl data table out of toy data (see the petl sketch after this list)
  • install SQLAlchemy and connect to MySQL
  • install the latest available version of HTTPX on your system
  • limit the amount of time your app spends waiting for a response to an HTTP request by using timeouts
  • make use of the rowreduce() function to reduce rows and compute aggregate statistics
  • map fields in a petl table to transformations based on functions
  • perform insert and delete operations
  • perform joins based on overlapping intervals rather than absolute values
  • perform SQL-like equi-joins on petl data tables
  • perform various grouping operations on the data in a table
  • perform various import and export operations on CSV, TSV, and TXT files
  • perform various update and replace operations on petl data tables by defining functions to perform transformations
  • query data using object relational mapping
  • read in data from the serialized pickle and XML file formats
  • read JSON data, perform various operations, and export it to a persistent format
  • recall the types of exceptions that can be encountered when sending and processing requests with HTTPX
  • recognize the messages conveyed in the different status codes sent in an HTTP response
  • recognize the types of exceptions that can be encountered when sending and processing requests with HTTPX
  • recognize when a redirect has taken place upon submitting an HTTP request
  • retrieve information about a remote resource using a HEAD request
  • submit data to a remote server using a POST request with HTTPX
  • submit HTTP GET requests with one or more parameters using HTTPX (see the HTTPX sketch after this list)
  • use an httpx.Cookies instance to send a collection of properties to a remote server
  • use slicing, dicing, and merging operations on petl data tables
  • use the aggregate function to calculate aggregate statistics for a field in a table
  • use the facet petl function to define filters on specific fields in a table
  • use the fetchmany() function to retrieve the output of a select query
  • use the HTTPX AsyncClient to download a set of files asynchronously
  • use triggers to automate operations
  • work with transaction aborts
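
The petl objectives above (building a table from toy data, slicing, inserting columns, the aggregate and rowreduce() functions, and CSV export) are illustrated by the following minimal petl sketch. It assumes only that petl is installed (pip install petl); the toy data, field names, and output file name are invented for illustration and are not taken from the course.

    import petl as etl

    # A basic petl data table built from toy data; the first row is the header.
    table = etl.wrap([
        ['id', 'region', 'amount'],
        [1, 'north', 10.5],
        [2, 'south', 7.25],
        [3, 'north', 3.0],
        [4, 'east', 12.75],
    ])

    # Slicing: keep rows 1..2 (zero-based start, stop-exclusive).
    sliced = etl.rowslice(table, 1, 3)

    # Insert a derived column computed from each row.
    with_tax = etl.addfield(table, 'with_tax',
                            lambda row: round(row['amount'] * 1.07, 2))

    # Aggregate statistics for a field, grouped by a key field.
    totals = etl.aggregate(table, 'region', sum, 'amount')

    # rowreduce(): collapse each group of rows into one computed row.
    def region_sum(key, rows):
        return [key, sum(float(r[2]) for r in rows)]

    reduced = etl.rowreduce(table, key='region', reducer=region_sum,
                            header=['region', 'total'])

    # Export to a persistent format (CSV).
    etl.tocsv(with_tax, 'amounts.csv')

    print(etl.look(reduced))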
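
The SQL objectives (creating tables, a primary key with autoincrement, foreign key and check constraints, indexes, inserting rows, and fetchmany()) follow the same pattern across database engines. The SQL sketch below uses the standard-library sqlite3 module so it runs without a MySQL server, whereas the track itself also works with SQLAlchemy and MySQL; the table and column names are invented for illustration.

    import sqlite3

    conn = sqlite3.connect(':memory:')
    conn.execute('PRAGMA foreign_keys = ON')   # enforce foreign key constraints
    cur = conn.cursor()

    # Create a table with an autoincrementing primary key.
    cur.execute("""
        CREATE TABLE customers (
            id   INTEGER PRIMARY KEY AUTOINCREMENT,
            name TEXT NOT NULL
        )
    """)

    # Create a second table with a foreign key and a check constraint.
    cur.execute("""
        CREATE TABLE orders (
            id          INTEGER PRIMARY KEY AUTOINCREMENT,
            customer_id INTEGER NOT NULL,
            total       REAL CHECK (total >= 0),
            FOREIGN KEY (customer_id) REFERENCES customers (id)
        )
    """)

    # Insert rows into both tables.
    cur.executemany("INSERT INTO customers (name) VALUES (?)",
                    [('Ada',), ('Grace',), ('Alan',)])
    cur.executemany("INSERT INTO orders (customer_id, total) VALUES (?, ?)",
                    [(1, 19.99), (1, 5.00), (2, 42.50)])

    # Create an index on a table.
    cur.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")

    conn.commit()

    # Retrieve the output of a SELECT query in batches with fetchmany().
    cur.execute("""
        SELECT c.name, o.total
        FROM orders AS o
        JOIN customers AS c ON c.id = o.customer_id
        ORDER BY o.id
    """)
    while True:
        batch = cur.fetchmany(2)
        if not batch:
            break
        print(batch)

    conn.close()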
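
The HTTPX objectives (GET requests with parameters, POST with JSON, cookies, timeouts, status codes, redirects, HEAD requests, exceptions, and the AsyncClient) can be pictured with the HTTPX sketch below. It assumes a recent HTTPX release is installed (pip install httpx) and uses httpbin.org purely as a placeholder test endpoint; the parameter values and cookie names are invented for illustration.

    import asyncio
    import httpx

    # GET with query parameters and an explicit timeout (seconds).
    resp = httpx.get('https://httpbin.org/get',
                     params={'q': 'petl', 'page': 1},
                     timeout=5.0)
    print(resp.status_code)            # status code sent in the HTTP response

    # POST JSON data to a remote server.
    resp = httpx.post('https://httpbin.org/post', json={'name': 'Ada'}, timeout=5.0)
    resp.raise_for_status()            # raises httpx.HTTPStatusError on 4xx/5xx

    # Send a collection of properties to the server via httpx.Cookies.
    cookies = httpx.Cookies()
    cookies.set('session', 'abc123')
    resp = httpx.get('https://httpbin.org/cookies', cookies=cookies, timeout=5.0)

    # HEAD request: headers only, no body; follow_redirects controls redirects.
    resp = httpx.head('https://httpbin.org/redirect/1',
                      follow_redirects=True, timeout=5.0)
    print(resp.history)                # non-empty when a redirect took place

    # Exceptions: timeouts and other network errors derive from httpx.RequestError.
    try:
        httpx.get('https://httpbin.org/delay/10', timeout=1.0)
    except httpx.TimeoutException as exc:
        print(f'timed out: {exc!r}')

    # AsyncClient: fetch several URLs concurrently.
    async def fetch_all(urls):
        async with httpx.AsyncClient(timeout=5.0) as client:
            responses = await asyncio.gather(*(client.get(u) for u in urls))
            return [r.status_code for r in responses]

    print(asyncio.run(fetch_all(['https://httpbin.org/get'] * 3)))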