
How to Read XML File with Python and Pandas

In this quick tutorial, we’ll cover how to read or convert an XML file into a Pandas DataFrame or a Python data structure.

Since version 1.3, Pandas has offered an elegant solution for reading XML files: pd.read_xml().

With a single call to this method we can read an XML file into a Pandas DataFrame or a Python data structure.

Below we will cover multiple examples in greater detail, using two approaches: pandas.read_xml() and the xmltodict library.

Setup

Suppose we have a simple XML file, sitemap.xml, with the following structure:

<urlset>
   <url>
      <loc>https://example.com/item-1</loc>
      <lastmod>2022-06-02T00:00:00Z</lastmod>
      <changefreq>weekly</changefreq>
   </url>
   <url>
      <loc>https://example.com/item-2</loc>
      <lastmod>2022-06-02T11:34:37Z</lastmod>
      <changefreq>weekly</changefreq>
   </url>
   <url>
      <loc>https://example.com/item-3</loc>
      <lastmod>2022-06-03T19:24:47Z</lastmod>
      <changefreq>weekly</changefreq>
   </url>
</urlset>

which we would like to read as a Pandas DataFrame, as shown below:

loc lastmod changefreq
0 https://example.com/item-1 2022-06-02T00:00:00Z weekly
1 https://example.com/item-2 2022-06-02T11:34:37Z weekly
2 https://example.com/item-3 2022-06-03T19:24:47Z weekly

or get the links as a Python list:

['https://example.com/item-1', 'https://example.com/item-2', 'https://example.com/item-3'] 

Step 1: Read XML File with read_xml()

The official documentation of the read_xml() method is available at https://pandas.pydata.org/docs/reference/api/pandas.read_xml.html (it is also reproduced at the end of this article).

To read a local XML file in Python, we can pass the path of the file:

import pandas as pd

df = pd.read_xml('sitemap.xml')
loc lastmod changefreq
0 https://example.com/item-1 2022-06-02T00:00:00Z weekly
1 https://example.com/item-2 2022-06-02T11:34:37Z weekly
2 https://example.com/item-3 2022-06-03T19:24:47Z weekly

The method has several useful parameters; a short usage sketch follows this list:

  • xpath — The XPath to parse the required set of nodes for migration to DataFrame.
  • elems_only — Parse only the child elements at the specified xpath. By default, all child elements and non-empty text nodes are returned.
  • names — Column names for DataFrame of parsed XML data.
  • encoding — Encoding of XML document.
  • namespaces — The namespaces defined in XML document as dicts with key being namespace prefix and value the URI.
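For illustration, here is a minimal sketch (not from the original article) that combines some of these parameters to read the sample sitemap, select the url nodes explicitly and rename the resulting columns:

import pandas as pd

df = pd.read_xml(
    'sitemap.xml',
    xpath='//url',                                # the nodes that become rows
    names=['url', 'last_modified', 'frequency'],  # rename loc/lastmod/changefreq
    encoding='utf-8',
)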

Step 2: Read XML File with read_xml() — remote

Now let’s use Pandas to read XML from a remote location.

The first parameter of read_xml() is path_or_buffer, described as:

String, path object (implementing os.PathLike[str]), or file-like object implementing a read() function. The string can be any valid XML string or a path. The string can further be a URL. Valid URL schemes include http, ftp, s3, and file.

So we can read remote files the same way:

import pandas as pd

df = pd.read_xml('https://s3.example.com/sitemap.xml.gz')

The final output will be exactly the same as before: a DataFrame containing all values from the XML data.

Step 3: Read XML File as Python list or dict

Now suppose you need to convert the XML file to a Python list or dictionary.

We need to read the XML file first, convert it to a DataFrame, and finally get the values from this DataFrame:

Example 1: List
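The snippet itself is not shown in the original text; a minimal sketch that produces the list below, assuming df is the DataFrame read in Step 1:

# take the 'loc' column and convert it to a plain Python list
links = df['loc'].to_list()
print(links)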

['https://example.com/item-1', 'https://example.com/item-2', 'https://example.com/item-3'] 

Example 2: Dictionary
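No snippet survives here either; a sketch using DataFrame.to_dict(), which by default returns a column-oriented dictionary ({column -> {index -> value}}):

# e.g. {'loc': {0: 'https://example.com/item-1', ...}, 'changefreq': {0: 'weekly', ...}}
df[['loc', 'changefreq']].to_dict()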

Example 3: Dictionary — orient index

df[['loc', 'changefreq']].to_dict(orient='index') 
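With the sample sitemap from the Setup section this returns one dictionary per row, keyed by the row index:

{0: {'loc': 'https://example.com/item-1', 'changefreq': 'weekly'},
 1: {'loc': 'https://example.com/item-2', 'changefreq': 'weekly'},
 2: {'loc': 'https://example.com/item-3', 'changefreq': 'weekly'}}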

Step 4: Read multiple XML Files in Python

Finally, let’s see how to read multiple identically structured XML files with Python and Pandas.

Suppose that the files all follow the sitemap format shown in the Setup section.

We can use the following code to read all files in a given range and concatenate them into a single DataFrame:

import pandas as pd

df_temp = []
for i in range(1, 10):
    # build the URL of the i-th sitemap file (adjust the pattern to your file names)
    s = f'https://s3.example.com/sitemap-{i}.xml.gz'
    df_site = pd.read_xml(s)
    df_temp.append(df_site)

The result is a list of DataFrames which can be concatenated into a single one by:
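The concatenation call is not shown in the original text; a minimal sketch using pandas.concat:

# combine the list of DataFrames into one, rebuilding the index
df_all = pd.concat(df_temp, ignore_index=True)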

Now we have the information from all XML files in df_all.

Step 5: Read XML File — xmltodict

There is an alternative solution for reading XML files in Python: the xmltodict library.

To read an XML file we can do:

import xmltodict

with open('sitemap.xml') as fd:
    doc = xmltodict.parse(fd.read())

Accessing elements can be done by:
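The access snippet is missing from the original text; a sketch assuming the sitemap structure from the Setup section (xmltodict turns repeated elements into a list of dicts):

urls = doc['urlset']['url']          # list of <url> entries
first_link = urls[0]['loc']          # 'https://example.com/item-1'
links = [u['loc'] for u in urls]     # all links as a Python list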

Conclusion

In this article, we covered several ways to read an XML file with Python and Pandas. Now we know how to read local or remote XML files using two Python libraries.

The different options and parameters make XML conversion with Python easy and flexible.



pandas.read_xml

pandas.read_xml(path_or_buffer, *, xpath='./*', namespaces=None, elems_only=False, attrs_only=False, names=None, dtype=None, converters=None, parse_dates=None, encoding='utf-8', parser='lxml', stylesheet=None, iterparse=None, compression='infer', storage_options=None, dtype_backend=_NoDefault.no_default)

Read XML document into a DataFrame object.

path_or_buffer str, path object, or file-like object

String, path object (implementing os.PathLike[str]), or file-like object implementing a read() function. The string can be any valid XML string or a path. The string can further be a URL. Valid URL schemes include http, ftp, s3, and file.

xpath str, optional, default ‘./*’

The XPath to parse required set of nodes for migration to DataFrame. XPath should return a collection of elements and not a single element. Note: The etree parser supports limited XPath expressions. For more complex XPath, use lxml which requires installation.

namespaces dict, optional

The namespaces defined in XML document as dicts with key being namespace prefix and value the URI. There is no need to include all namespaces in XML, only the ones used in xpath expression. Note: if XML document uses default namespace denoted as xmlns='<URI>' without a prefix, you must assign any temporary namespace prefix such as ‘doc’ to the URI in order to parse underlying nodes and/or attributes. For example,

namespaces = {"doc": "https://example.com"}

elems_only bool, optional, default False

Parse only the child elements at the specified xpath . By default, all child elements and non-empty text nodes are returned.

attrs_only bool, optional, default False

Parse only the attributes at the specified xpath . By default, all attributes are returned.

names list-like, optional

Column names for DataFrame of parsed XML data. Use this parameter to rename original element names and distinguish same named elements and attributes.
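For illustration (this example is not part of the official documentation), renaming the sitemap columns from the tutorial above while reading:

import pandas as pd

df = pd.read_xml('sitemap.xml', names=['url', 'last_modified', 'frequency'])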

dtype Type name or dict of column -> type, optional

Data type for data or columns. E.g. Use str or object together with suitable na_values settings to preserve and not interpret dtype. If converters are specified, they will be applied INSTEAD of dtype conversion.

converters dict, optional

Dict of functions for converting values in certain columns. Keys can either be integers or column labels.

parse_dates bool or list of int or names or list of lists or dict, optional

Identifiers to parse index or columns to datetime. The behavior is as follows:

  • boolean. If True -> try parsing the index.
  • list of int or names. e.g. If [1, 2, 3] -> try parsing columns 1, 2, 3 each as a separate date column.
  • list of lists. e.g. If [[1, 3]] -> combine columns 1 and 3 and parse as a single date column.
  • dict, e.g. {‘foo’: [1, 3]} -> parse columns 1, 3 as date and call result ‘foo’

parser {‘lxml’, ‘etree’}, default ‘lxml’

Parser module to use for retrieval of data. Only ‘lxml’ and ‘etree’ are supported. With ‘lxml’ more complex XPath searches and ability to use XSLT stylesheet are supported.

stylesheet str, path object or file-like object

A URL, file-like object, or a raw string containing an XSLT script. This stylesheet should flatten complex, deeply nested XML documents for easier parsing. To use this feature you must have lxml module installed and specify ‘lxml’ as parser . The xpath must reference nodes of transformed XML document generated after XSLT transformation and not the original XML document. Only XSLT 1.0 scripts and not later versions is currently supported.

iterparse dict, optional

The nodes or attributes to retrieve in iterparsing of XML document as a dict with key being the name of repeating element and value being list of elements or attribute names that are descendants of the repeated element. Note: If this option is used, it will replace xpath parsing and unlike xpath, descendants do not need to relate to each other but can exist anywhere in document under the repeating element. This memory-efficient method should be used for very large XML files (500MB, 1GB, or 5GB+). For example,

iterparse = {"row_element": ["child_elem", "attr", "grandchild_elem"]}
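A hedged sketch of how this could look for the sitemap used in the tutorial above (file and element names assumed from the Setup section):

import pandas as pd

# iterparse replaces xpath: the key is the repeating element,
# the values are the descendant elements/attributes to extract
df = pd.read_xml(
    'sitemap.xml',
    iterparse={'url': ['loc', 'lastmod', 'changefreq']},
)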

compression str or dict, default ‘infer’

For on-the-fly decompression of on-disk data. If ‘infer’ and ‘path_or_buffer’ is path-like, then detect compression from the following extensions: ‘.gz’, ‘.bz2’, ‘.zip’, ‘.xz’, ‘.zst’, ‘.tar’, ‘.tar.gz’, ‘.tar.xz’ or ‘.tar.bz2’ (otherwise no compression). If using ‘zip’ or ‘tar’, the ZIP file must contain only one data file to be read in. Set to None for no decompression. Can also be a dict with key ‘method’ set to one of {‘zip’, ‘gzip’, ‘bz2’, ‘zstd’, ‘tar’} and other key-value pairs are forwarded to zipfile.ZipFile, gzip.GzipFile, bz2.BZ2File, zstandard.ZstdDecompressor or tarfile.TarFile, respectively. As an example, the following could be passed for Zstandard decompression using a custom compression dictionary: compression={'method': 'zstd', 'dict_data': my_compression_dict}.

New in version 1.5.0: Added support for .tar files.

Changed in version 1.4.0: Zstandard support.
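For example, the gzipped sitemap from Step 2 of the tutorial could be read with explicit compression (with the default ‘infer’, the .gz extension is detected automatically); the URL is the assumed one from above:

import pandas as pd

df = pd.read_xml('https://s3.example.com/sitemap.xml.gz', compression='gzip')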

storage_options dict, optional

Extra options that make sense for a particular storage connection, e.g. host, port, username, password, etc. For HTTP(S) URLs the key-value pairs are forwarded to urllib.request.Request as header options. For other URLs (e.g. starting with “s3://”, and “gcs://”) the key-value pairs are forwarded to fsspec.open. Please see fsspec and urllib for more details, and for more examples on storage options refer to the pandas documentation.

dtype_backend {‘numpy_nullable’, ‘pyarrow’}, defaults to NumPy backed DataFrames

Which dtype_backend to use, e.g. whether a DataFrame should have NumPy arrays, nullable dtypes are used for all dtypes that have a nullable implementation when “numpy_nullable” is set, pyarrow is used for all dtypes if “pyarrow” is set.

The dtype_backends are still experimental.

See also

read_json : Convert a JSON string to pandas object.

read_html : Read HTML tables into a list of DataFrame objects.

This method is best designed to import shallow XML documents in following format which is the ideal fit for the two-dimensions of a DataFrame (row by column).

<root>
    <row>
      <column1>data</column1>
      <column2>data</column2>
      <column3>data</column3>
      ...
    </row>
    <row>
      ...
    </row>
    ...
</root>

As a file format, XML documents can be designed any way including layout of elements and attributes as long as it conforms to W3C specifications. Therefore, this method is a convenience handler for a specific flatter design and not all possible XML structures.

However, for more complex XML documents, stylesheet allows you to temporarily redesign original document with XSLT (a special purpose language) for a flatter version for migration to a DataFrame.

This function will always return a single DataFrame or raise exceptions due to issues with XML document, xpath , or other parameters.

See the read_xml documentation in the IO section of the docs for more information in using this method to parse XML files to DataFrames.

>>> xml = '''<?xml version='1.0' encoding='utf-8'?>
... <data>
...   <row>
...     <shape>square</shape>
...     <degrees>360</degrees>
...     <sides>4.0</sides>
...   </row>
...   <row>
...     <shape>circle</shape>
...     <degrees>360</degrees>
...     <sides/>
...   </row>
...   <row>
...     <shape>triangle</shape>
...     <degrees>180</degrees>
...     <sides>3.0</sides>
...   </row>
... </data>'''

>>> df = pd.read_xml(xml)
>>> df
      shape  degrees  sides
0    square      360    4.0
1    circle      360    NaN
2  triangle      180    3.0

>>> df = pd.read_xml(xml, xpath=".//row")
>>> df
      shape  degrees  sides
0    square      360    4.0
1    circle      360    NaN
2  triangle      180    3.0

>>> xml = '''<?xml version='1.0' encoding='utf-8'?>
... <doc:data xmlns:doc="https://example.com">
...   <doc:row>
...     <doc:shape>square</doc:shape>
...     <doc:degrees>360</doc:degrees>
...     <doc:sides>4.0</doc:sides>
...   </doc:row>
...   <doc:row>
...     <doc:shape>circle</doc:shape>
...     <doc:degrees>360</doc:degrees>
...     <doc:sides/>
...   </doc:row>
...   <doc:row>
...     <doc:shape>triangle</doc:shape>
...     <doc:degrees>180</doc:degrees>
...     <doc:sides>3.0</doc:sides>
...   </doc:row>
... </doc:data>'''

>>> df = pd.read_xml(xml,
...                  xpath="//doc:row",
...                  namespaces={"doc": "https://example.com"})
>>> df
      shape  degrees  sides
0    square      360    4.0
1    circle      360    NaN
2  triangle      180    3.0

