After review, this seems to be an issue with PyArrow, not pandas. Specifically, the `convert_columns` step in PyArrow's `from_pandas` function cannot handle the `uuid` library's `UUID` data type. A temporary workaround is to convert to the `bytes` data type before saving to Parquet:
```python
import uuid
import pandas as pd

df = pd.DataFrame({'id': [uuid.uuid4(), uuid.uuid4(), uuid.uuid4()]})

# Convert UUIDs to bytes for Arrow compatibility
df['id'] = df['id'].apply(lambda x: x.bytes)

df.to_parquet('sample_pandas_pa.parquet', engine='pyarrow')
```
The code above produces no error and saves the file successfully.

Of course, we could implement this conversion on our side in the `to_parquet` function, but it looks like a fix is needed on the PyArrow side. Could we get a comment from the maintainers on how to proceed?
Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the latest version of pandas.
- [x] I have confirmed this bug exists on the main branch of pandas.
Reproducible Example
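A minimal sketch consistent with the description below, assuming an `id` column of `uuid.UUID` objects and the `pyarrow` engine (versions as reported: pandas 2.2.3, pyarrow 20.0.0):

```python
import io
import uuid
import pandas as pd

df = pd.DataFrame({'id': [uuid.uuid4(), uuid.uuid4()]})

# On the reported versions this raises (pyarrow cannot infer an Arrow
# type for uuid.UUID objects during Table.from_pandas conversion)
buf = io.BytesIO()
try:
    df.to_parquet(buf, engine='pyarrow')
    failed = False
except Exception:
    failed = True
```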
Issue Description
Writing a DataFrame containing `uuid.UUID` values fails, even though pyarrow itself supports writing UUIDs.
Expected Behavior
Writing UUIDs succeeds.
Installed Versions
INSTALLED VERSIONS
commit : 0691c5c
python : 3.12.9
python-bits : 64
OS : Linux
OS-release : 6.8.0-57-generic
Version : #59~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Wed Mar 19 17:07:41 UTC 2
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 2.2.3
numpy : 2.2.6
pytz : 2025.2
dateutil : 2.9.0.post0
pip : None
Cython : None
sphinx : None
IPython : None
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : 2025.5.1
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : None
lxml.etree : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : 20.0.0
pyreadstat : None
pytest : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : 1.15.3
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2025.2
qtpy : None
pyqt5 : None