pandas.read_gbq

pandas.read_gbq(query, project_id=None, index_col=None, col_order=None, reauth=False, verbose=True, private_key=None, dialect='legacy', **kwargs)

Load data from Google BigQuery.
The main method a user calls to execute a query in Google BigQuery and read the results into a pandas DataFrame.

The Google BigQuery API Client Library v2 for Python is used; see its documentation for details.
Authentication to the Google BigQuery service is via OAuth 2.0.
If “private_key” is not provided:
By default “application default credentials” are used.
If default application credentials are not found or are restrictive, user account credentials are used. In this case, you will be asked to grant permissions for product name ‘pandas GBQ’.
If “private_key” is provided:
Service account credentials will be used to authenticate.
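For example, a minimal sketch of the two authentication paths (the project ID and key-file path below are placeholders, not real values):

import pandas as pd

# Default / user account credentials: if no application default credentials
# are found, an OAuth consent prompt for 'pandas GBQ' is triggered.
df = pd.read_gbq('SELECT 1 AS x', project_id='my-project')

# Service account credentials: pass the key as a file path or as the JSON
# string contents of the key file.
df = pd.read_gbq('SELECT 1 AS x',
                 project_id='my-project',
                 private_key='/path/to/service_account_key.json')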
Parameters:

query : str
    SQL-like query to return data values.
project_id : str
    Google BigQuery Account project ID.
index_col : str, optional
    Name of the result column to use for the index in the results DataFrame.
col_order : list(str), optional
    List of BigQuery column names in the desired order for the results DataFrame.
reauth : boolean, default False
    Force Google BigQuery to re-authenticate the user. This is useful if multiple accounts are used.
verbose : boolean, default True
    Verbose output.
private_key : str, optional
    Service account private key in JSON format. Can be a file path or the string contents of the key file. This is useful for remote server authentication (e.g. a Jupyter/IPython notebook on a remote host).
dialect : {'legacy', 'standard'}, default 'legacy'
    'legacy' : Use BigQuery's legacy SQL dialect.
    'standard' : Use BigQuery's standard SQL (beta), which is compliant with the SQL 2011 standard. For more information see the BigQuery SQL Reference.
**kwargs : Arbitrary keyword arguments
    configuration (dict): query config parameters for job processing. For example:

    configuration = {'query': {'useQueryCache': False}}

    For more information see the BigQuery SQL Reference.
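A sketch of passing such a configuration through **kwargs (the project, dataset, and table names are placeholders for illustration):

# Run a standard SQL query with the query cache disabled for this job.
configuration = {'query': {'useQueryCache': False}}
df = pd.read_gbq('SELECT name FROM `my-project.my_dataset.my_table`',
                 project_id='my-project',
                 dialect='standard',
                 configuration=configuration)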
Returns:

df : DataFrame
    DataFrame representing the results of the query.
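For illustration, a usage sketch combining index_col, col_order, and the legacy dialect (the dataset, table, and column names are assumptions, not taken from a real project):

# Legacy SQL table references use the [project:dataset.table] form.
df = pd.read_gbq('SELECT name, age, city FROM [my-project:my_dataset.people]',
                 project_id='my-project',
                 index_col='name',           # use 'name' as the DataFrame index
                 col_order=['age', 'city'],  # order of the remaining columns
                 dialect='legacy',
                 verbose=False)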