Input/output
Pickling
read_pickle(path[, compression])
    Load pickled pandas object (or any object) from file.
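A minimal round trip with read_pickle, assuming only a writable temporary directory (the frame contents are illustrative):

```python
import os
import tempfile

import pandas as pd

# Build a small frame and pickle it to a temporary file.
df = pd.DataFrame({"a": [1, 2, 3], "b": ["x", "y", "z"]})
with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "frame.pkl")
    df.to_pickle(path)               # serialize to disk
    restored = pd.read_pickle(path)  # load it back

# Pickling preserves dtypes and index exactly.
assert restored.equals(df)
```

Note that unpickling executes arbitrary code, so read_pickle should only be used on trusted files.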
Flat file
read_table(filepath_or_buffer, …)
    Read general delimited file into DataFrame.
read_csv(filepath_or_buffer, …)
    Read a comma-separated values (csv) file into DataFrame.
read_fwf(filepath_or_buffer, …)
    Read a table of fixed-width formatted lines into DataFrame.
read_msgpack(path_or_buf[, encoding, iterator])
    (DEPRECATED) Load msgpack pandas object from the specified file path.
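A sketch of the flat-file readers above; both accept any file-like object, so in-memory buffers stand in for files here (the column names and data are illustrative):

```python
import io

import pandas as pd

# read_csv: delimited text; the header row becomes the column index.
csv_buf = io.StringIO("name,score\nann,1\nbob,2\n")
df = pd.read_csv(csv_buf)

# read_fwf: fixed-width columns, given here as explicit field widths.
fwf_buf = io.StringIO("ann  1\nbob  2\n")
fixed = pd.read_fwf(fwf_buf, widths=[5, 1], names=["name", "score"])

assert list(df["score"]) == [1, 2]
assert list(fixed["name"]) == ["ann", "bob"]
```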
Clipboard
read_clipboard([sep])
    Read text from clipboard and pass to read_csv.
Excel
read_excel(io[, sheet_name, header, names, …])
    Read an Excel file into a pandas DataFrame.
ExcelFile.parse(self[, sheet_name, header, …])
    Parse specified sheet(s) into a DataFrame.
ExcelWriter(path[, engine, date_format, …])
    Class for writing DataFrame objects into Excel sheets; by default uses xlwt for xls and openpyxl for xlsx.
JSON
read_json([path_or_buf, orient, typ, dtype, …])
    Convert a JSON string to pandas object.
json_normalize(data, …)
    Normalize semi-structured JSON data into a flat table.
build_table_schema(data[, index, …])
    Create a Table schema from data.
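A sketch of the JSON readers; the records are illustrative, and note that json_normalize is exposed as pd.json_normalize on pandas >= 1.0 (on 0.25 it is importable from pandas.io.json):

```python
import io

import pandas as pd

# read_json parses a JSON document from a path or file-like object.
buf = io.StringIO('[{"a": 1, "b": 2}, {"a": 3, "b": 4}]')
df = pd.read_json(buf)

# json_normalize flattens nested records into dotted column names.
records = [{"id": 1, "geo": {"lat": 10, "lon": 20}}]
flat = pd.json_normalize(records)

assert list(df["a"]) == [1, 3]
assert list(flat.columns) == ["id", "geo.lat", "geo.lon"]
```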
HTML
read_html(io[, match, flavor, header, …])
    Read HTML tables into a list of DataFrame objects.
HDFStore: PyTables (HDF5)
read_hdf(path_or_buf[, key, mode])
    Read from the store; close it if we opened it.
HDFStore.put(self, key, value[, format, append])
    Store an object in the HDFStore.
HDFStore.append(self, key, value[, format, …])
    Append to a Table in the file.
HDFStore.get(self, key)
    Retrieve a pandas object stored in the file.
HDFStore.select(self, key[, where, start, …])
    Retrieve a pandas object stored in the file, optionally selecting rows with where criteria.
HDFStore.info(self)
    Print detailed information on the store.
HDFStore.keys(self)
    Return a (potentially unordered) list of the keys corresponding to the objects stored in the HDFStore.
HDFStore.groups(self)
    Return a list of all the top-level nodes (that are not themselves a pandas storage object).
HDFStore.walk(self[, where])
    Walk the PyTables group hierarchy for pandas objects.
Feather
read_feather(path[, columns, use_threads])
    Load a feather-format object from the file path.
Parquet
read_parquet(path[, engine, columns])
    Load a parquet object from the file path, returning a DataFrame.
SAS
read_sas(filepath_or_buffer[, format, …])
    Read SAS files stored as either XPORT or SAS7BDAT format files.
SQL
read_sql_table(table_name, con[, schema, …])
    Read SQL database table into a DataFrame.
read_sql_query(sql, con[, index_col, …])
    Read SQL query into a DataFrame.
read_sql(sql, con[, index_col, …])
    Read SQL query or database table into a DataFrame.
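A sketch of read_sql against an in-memory SQLite database; read_sql_table requires an SQLAlchemy connectable, but a plain query string works with any DB-API connection, so the standard-library sqlite3 module suffices (the table and rows are illustrative):

```python
import sqlite3

import pandas as pd

# An in-memory SQLite database needs no external dependencies.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER, name TEXT)")
con.executemany("INSERT INTO t VALUES (?, ?)", [(1, "ann"), (2, "bob")])

# With a query string, read_sql dispatches to read_sql_query.
df = pd.read_sql("SELECT id, name FROM t ORDER BY id", con)
con.close()

assert list(df["name"]) == ["ann", "bob"]
```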
Google BigQuery
read_gbq(query[, project_id, index_col, …])
    Load data from Google BigQuery.
STATA
read_stata(filepath_or_buffer[, …])
    Read Stata file into DataFrame.
StataReader.data(self, **kwargs)
    (DEPRECATED) Read observations from Stata file, converting them into a DataFrame.
StataReader.data_label
    Return data label of Stata file.
StataReader.value_labels(self)
    Return a dict mapping each variable name to a dict that maps each value to its corresponding label.
StataReader.variable_labels(self)
    Return variable labels as a dict, mapping each variable name to its corresponding label.
StataWriter.write_file(self)
    Export DataFrame object to Stata dta format.
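Stata support is built into pandas with no extra dependency, so a round trip is easy to sketch (the frame contents are illustrative):

```python
import os
import tempfile

import pandas as pd

# Write a frame to Stata .dta format and read it back.
df = pd.DataFrame({"x": [1.0, 2.0], "y": [3.0, 4.0]})
with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "frame.dta")
    df.to_stata(path, write_index=False)  # DataFrame.to_stata uses StataWriter
    restored = pd.read_stata(path)

# Floats survive as float64; write_index=False avoids an extra index column.
assert restored.equals(df)
```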
© 2008–2012, AQR Capital Management, LLC, Lambda Foundry, Inc. and PyData Development Team
Licensed under the 3-clause BSD License.
https://pandas.pydata.org/pandas-docs/version/0.25.0/reference/io.html