
Moving to PyArrow dtypes by default #61618

Open

datapythonista opened this issue Jun 9, 2025 · 27 comments
Labels
Arrow (pyarrow functionality), Needs Discussion (requires discussion from core team before further action)

Comments

@datapythonista
Member

There have been some discussions about this before, but as far as I know there is no plan and no decision has been made.

My understanding is (feel free to disagree):

  1. If we were starting pandas today, we would only use Arrow as storage for DataFrame columns / Series (see the minimal example after this list)
  2. After all the great work that has been done on building new pandas types based on PyArrow, we are not considering other Arrow implementations
  3. Based on 1 and 2, we are moving towards pandas based on PyArrow, and the main question is what's the transition path
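
For concreteness, the Arrow-backed storage already exists today as an opt-in, via the existing "[pyarrow]" dtype aliases. A minimal example (requires pyarrow to be installed):

```python
import pandas as pd

# Opt-in today: Arrow-backed storage via the "[pyarrow]" dtype aliases.
s = pd.Series([1, 2, None], dtype="int64[pyarrow]")
print(s.dtype)   # int64[pyarrow]
print(s.isna())  # the None arrives as a missing value (pd.NA)
```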

@jbrockmendel commented this, and I think many others share this point of view, based on past interactions:

There's a path to making it feasible to use PyArrow types by default, but that path probably takes multiple major release cycles. Doing it for 3.0 would be, frankly, insane.

It would be interesting to know why exactly; I guess it's mainly for two reasons:

  • Finishing the PyArrow types and making operations with them as reliable and fast as with the original pandas types
  • Giving users time to adapt

I don't know the exact state of the PyArrow types, or how often users would face problems using them instead of the original ones. My perception is that there aren't any major efforts to improve them at this point. So, I'm unsure whether the situation in that regard would be very different if we made the PyArrow types the default tomorrow or if we made them the default in two years.

My understanding is that the only person who is paid consistently to work on pandas is Matt, and he's doing an amazing job at keeping the project going: reviewing most of the PRs, keeping the CI in good shape... But I don't think he, or anyone else, is able to put hours into developing new things the way it used to happen. For reference, this is the GitHub chart of pandas activity (commits) since pandas 2.0:

[Image: GitHub commit activity chart for pandas since the 2.0 release]

So, in my opinion, the existing problems with the PyArrow types will only start to be addressed significantly once they become the default.

So, in my opinion our two main options are:

  • We move forward with the PyArrow transition. pandas 3.0 will surely not be the best pandas version ever if we start using PyArrow types, but pandas 3.1 will be much better, and pandas 3.2 may be as good as pandas 2 in reliability and speed, while being much closer to what we would like pandas to be.

Of course not all users are ready for pandas 3.0 with Arrow types. They can surely pin to pandas=2 until pandas 3 is more mature and they have made the required changes to their code. We can surely add a flag pandas.options.mode.use_arrow = False that reverts the new default to the old status quo, so users can move to pandas 3.0 but stay with the old types until both we (pandas) and they are ready for the new default types. The transition from Python 2 to 3 (which is surely an example of what not to do) took more than 10 years. I don't think in our case we need as much. And if there is interest (aka money) we can also support the pandas 2 series for as long as needed.
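
A hypothetical sketch of how that escape hatch could look; the option name use_arrow comes from this proposal and does not exist in any released pandas:

```python
import pandas as pd

# Hypothetical option from the proposal above -- not an existing pandas API.
pd.options.mode.use_arrow = False  # opt back into the legacy NumPy-backed dtypes

# With the flag off, constructors would keep returning the old defaults:
s = pd.Series([1, 2, None])  # float64 with NaN today; int64[pyarrow] under the proposal
```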

  • The other option is to continue with the transition to the new nullable types, which, as I understand it, we implemented because PyArrow didn't exist at the time: continue to put our limited resources into them, make users adapt their code to a new temporary status quo rather than the final one we envision, and stay in this transition period, delaying the move to PyArrow by, I assume, around 6 years (Brock mentioned multiple major release cycles, so I assume something like 3 at a rate of one major release every 2 years).

It would be great to know other people's thoughts and ideal plans, and to see what makes more sense. But to me personally, based on the above information, it doesn't sound more insane to move to PyArrow in pandas 3 than to move in pandas 6.

@datapythonista added the "Needs Discussion" label Jun 9, 2025
@jbrockmendel
Member

Moving users to pyarrow types by default in 3.0 would be insane because #32265 has not been addressed. Getting that out of the way was the point of PDEP-16, which you and Simon are oddly hostile toward.

@mroeschke added the "Arrow" label Jun 9, 2025
@mroeschke
Member

Since I helped draft PDEP-10, I would like a world where the Arrow type system with NA semantics is the only pandas type system.

Secondarily, I would like a world where pandas has the Arrow type system with NA semantics and the legacy (NaN) NumPy type system, completely independent from the Arrow type system (e.g. a user cannot mix the two in any way)


I agree with Brock that null semantics (NA vs NaN) must inevitably be discussed when adopting a new type system.

I've also generally been concerned about the growing complexity of PDEP-14 with the configurability of "storage" and NA semantics (like having to define a comparison hierarchy, #60639). While I understand that we've been very cautious about compatibility with existing types, I don't think this is maintainable or clear for users in the long run.


My ideal roadmap would be:

  1. In pandas 3.x
    • Deprecate the NumPy nullable types
    • Deprecate NaN as a missing value for Arrow types
    • Deprecate mixing Arrow types and Numpy types in any operation
    • Add some global configuration API, set_option("type_system", "legacy" | "pyarrow"), that configures the "default" type system as either NaN semantics with NumPy or NA semantics with Arrow (with the default being "legacy"; sketched after this list)
  2. In pandas 4.x
    • Enforce above deprecations
    • Deprecate "legacy" as the "default" type system
  3. In pandas 5.x
    • Enforce "pyarrow" as the new "default" type system

@jbrockmendel
Member

Deprecate NaN as a missing value for Arrow types

So with this deprecation enforced, NaN in a constructor, setitem, or csv would be treated as distinct from pd.NA? If so, I’m on board but I expect that to be a painful deprecation.
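
For concreteness, a small example of the behavior under discussion, assuming the current (pandas 2.x) coercion works as described in the comments:

```python
import numpy as np
import pandas as pd

s = pd.Series([1.0, np.nan], dtype="float64[pyarrow]")
print(s.isna())  # today: [False, True] -- NaN is coerced to pd.NA on construction

# With the deprecation enforced, NaN would be stored as a real float NaN,
# distinct from pd.NA, and only explicit pd.NA entries would count as missing.
```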

@datapythonista
Member Author

To be clear, I'm not hostile towards PDEP-16 at all. I think it's important for pandas to have clear and simple missing value handling, and while incomplete, I think the PDEP and the discussions have been very useful and insightful. And I really appreciate that work.

I just don't see PDEP-16 as a blocker for moving to pyarrow, even less if implemented at the same time. Also, I wouldn't spend time on our own nullable dtypes; I would implement PDEP-16 only for the pyarrow types.

I couldn't agree more with the points Matt mentions for pandas 3.x. Personally I would change the default earlier. It sounds like pandas 4.x is mostly about showing a warning to users until they manually change the default, which I personally wouldn't do. But that's a minor point; I like the general idea.

@mroeschke
Member

mroeschke commented Jun 10, 2025

Deprecate NaN as a missing value for Arrow types

So with this deprecation enforced, NaN in a constructor, setitem, or csv would be treated as distinct from pd.NA?

Correct. Yeah I am optimistic that most of the deprecation would hopefully go into ArrowExtensionArray, but of course there are probably a lot of one-off places that need addressing.

I just don't see PDEP-16 as a blocker for moving to pyarrow

There are a lot of references to type systems in that PDEP that I think warrant some re-imagining given which type systems are favored. As mentioned before, (unfortunately) I think type systems and missing value semantics need to be discussed together

@datapythonista
Member Author

I created a separate issue #61620 for the option mentioned in the description and in Matt's roadmap, since I think that's somewhat independent, and not blocked by PDEP-15, by this issue, or by anything else that I know of.

There are a lot of references to type systems in that PDEP that I think warrant some re-imagining given which type systems are favored. As mentioned before, (unfortunately) I think type systems and missing value semantics need to be discussed together

I fully agree with this. But I'm not sure I fully understand why PDEP-16 must be a blocker for defaulting to PyArrow types.

For users already using PyArrow, they'll have to follow the deprecation transition if they are using NaN as missing. To me, this can be started in 3.0, 4.0 or whenever. And probably the earlier the better, so no more code is written with the undesired behavior.

For users not yet using PyArrow, I do understand that it's better to force the move once the PyArrow dtypes behave as we think they should. I'm not convinced this should be a blocker, even less if the deprecation of the special treatment of NaN is also implemented in 3.0 or in the version where the PyArrow types become the default. Maybe I'm wrong, but if you are getting data from Parquet, CSV... you are not immediately affected by these missing value semantics problems. You need to create data manually (rare for most professional use cases imho), or you need to be explicitly setting NaN in existing data (also rare in my personal experience). Am I missing something that makes this point important enough to be a blocker for moving to PyArrow and for stopping investment in types that we plan to deprecate? If you have an example of commonly used code that is problematic for what I'm proposing, that could surely convince me, and help identify the optimal transition path.
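
As a concrete illustration of the I/O path, this opt-in already exists since pandas 2.0; the exact Arrow-backed string dtype shown can vary between versions:

```python
import io
import pandas as pd

csv = io.StringIO("a,b\n1,x\n,y\n")
df = pd.read_csv(csv, dtype_backend="pyarrow")  # existing opt-in since pandas 2.0
print(df.dtypes)       # a: int64[pyarrow], b: an Arrow-backed string dtype
print(df["a"].isna())  # the empty cell arrives as pd.NA -- no user-written NaN involved
```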

@simonjayhawkins
Member

Correct. Yeah I am optimistic that most of the deprecation would hopefully go into ArrowExtensionArray, but of course there are probably a lot of one-off places that need addressing.

@mroeschke a couple of questions:

  1. could the ArrowExtensionArray be described as a true extension array, i.e. one using the extension array interface for 3rd party EAs?

  2. does the ArrowExtensionArray rely on any Cython code for the implementation to work?

@simonjayhawkins
Member

Deprecate NaN as a missing value for Arrow types

So with this deprecation enforced, NaN in a constructor, setitem, or csv would be treated as distinct from pd.NA? If so, I’m on board but I expect that to be a painful deprecation.

I have always understood the ArrowExtensionArray and ArrowDtype to be experimental, with no PDEP and no roadmap item, existing for the purpose of evaluating PyArrow types in pandas to potentially, eventually, use as a backend for the pandas nullable dtypes.

So I can sort of understand why the ArrowDtypes are no longer pure and have allowed pandas semantics to creep into the API.

As experimental dtypes why do they need any deprecation at all? Where do we promote these types as pandas recommended types?

@simonjayhawkins
Member

just to be clear about my previous statement, there is a roadmap item

Apache Arrow interoperability
Apache Arrow is a cross-language development platform for in-memory data. The Arrow logical types are closely aligned with typical pandas use cases.

We'd like to provide better-integrated support for Arrow memory and data types within pandas. This will let us take advantage of its I/O capabilities and provide for better interoperability with other languages and libraries using Arrow.

but I've never interpreted this to cover adopting the current ArrowDtype system throughout pandas

@simonjayhawkins
Member

My ideal roadplan would be:

@mroeschke given the current level of funding and interest in the development of the pandas nullable dtypes, and the proportion of core devs that now appear to favor embracing the ArrowDtype instead, I fear that this may be, at this time, the most pragmatic approach to keeping pandas development moving forward, as it does seem to have slowed of late. I'm not necessarily comfortable deprecating so much prior effort, but then the same could have been said about Panel many years ago, and I'm not sure anyone misses it today. If the community wants nullable dtypes by default, they may be less interested in the implementation details or even, to some extent, the performance. If the situation changes, there is more funding and contributions in the future, and we have released a Windows 8 in the meantime, then we could perhaps bring back the pandas nullable types.

@WillAyd
Member

WillAyd commented Jun 10, 2025

The goals of this and PDEP-13 are pretty aligned: prefer PyArrow to build out our default type system where applicable, and fill in the gaps with whatever we have as a fallback. That PDEP conversation stalled; I'm not sure if it's worth reviving or if this issue is going to tackle a smaller subset of the problem, but in any case I definitely support this

@mroeschke
Member

mroeschke commented Jun 10, 2025

But I'm not sure I fully understand why PDEP-16 must be a blocker for defaulting to PyArrow types.

@datapythonista I just would like some agreement that defaulting to PyArrow types also means matching PDEP-16's proposal of (only) NA semantics for these types when making the change, for a consistent story, but I suppose they don't need to be done at the same time

could the ArrowExtensionArray be described as a true extension array, i.e. one using the extension array interface for 3rd party EAs?

@simonjayhawkins yes, it purely uses ExtensionArray and ExtensionDtype to implement functionality.

does the ArrowExtensionArray rely on any Cython code for the implementation to work?

It does not, and ideally it won't. When interacting with Cython-backed parts of pandas, e.g. groupby, we've created hooks to convert to NumPy first.
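
A quick way to verify this with the existing public API (the printed class path may differ between versions):

```python
import pandas as pd
from pandas.api.extensions import ExtensionArray

arr = pd.array([1, 2, None], dtype="int64[pyarrow]")
assert isinstance(arr, ExtensionArray)  # ArrowExtensionArray is a regular extension array
print(type(arr))  # e.g. <class 'pandas.core.arrays.arrow.array.ArrowExtensionArray'>
```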

As experimental dtypes why do they need any deprecation at all? Where do we promote these types as pandas recommended types?

This is a valid point; technically it shouldn't require deprecation.

While our docs don't necessarily state a recommended type, anecdotally it has felt like in the past year or two there have been quite a number of conference talks, blog posts, and books that have "celebrated" the newer Arrow types in pandas. Although attention != usage, it may still warrant some care if changing behavior IMO.

If the situation changes and there is more funding and contributions in the future and we have released a Windows 8 in the meantime then we could perhaps bring back the pandas nullable types.

An alternative to completely deprecating and removing the pandas NumPy nullable types is to spin them off into their own repository & package and treat them like any other 3rd party ExtensionArray library for users that still want to use them.
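
A spin-off seems plausible because the masked arrays already implement the public extension-array interface; a quick check with today's API:

```python
import pandas as pd
from pandas.api.extensions import ExtensionArray

arr = pd.array([1, 2, None], dtype="Int64")  # NumPy-backed nullable (masked) array
assert isinstance(arr, ExtensionArray)       # already built on the 3rd-party EA interface
```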

@simonjayhawkins
Member

If the situation changes and there is more funding and contributions in the future and we have released a Windows 8 in the meantime then we could perhaps bring back the pandas nullable types.

An alternative to completely deprecating and removing the pandas NumPy nullable types is to spin them off into their own repository & package and treat them like any other 3rd party ExtensionArray library for users that still want to use them.

we have a roadmap item:

Extensibility
Pandas extension types allow for extending NumPy types with custom data types and array storage. Pandas uses extension types internally, and provides an interface for 3rd-party libraries to define their own custom data types.

Many parts of pandas still unintentionally convert data to a NumPy array. These problems are especially pronounced for nested data.

We'd like to improve the handling of extension arrays throughout the library, making their behavior more consistent with the handling of NumPy arrays. We'll do this by cleaning up pandas' internals and adding new methods to the extension array interface.

the roadmap also states

pandas is in the process of moving roadmap points to PDEPs

so we could perhaps have a separate issue on this (perhaps culminating in a PDEP to clear this off that list) to address any points that would make the spin-off easier?

@simonjayhawkins
Member

But I'm not sure I fully understand why PDEP-16 must be a blocker for defaulting to PyArrow types.

@datapythonista I just would like some agreement that defaulting to PyArrow types also means matching PDEP-16's proposal of (only) NA semantics for these types when making the change, for a consistent story, but I suppose they don't need to be done at the same time

In the absence of a competing proposal, I think you should proceed assuming that PDEP-16 is going to be accepted someday. A stale PDEP cannot be allowed to hold up development in other areas, especially if you have the bandwidth and resources to move forward on this.

@simonjayhawkins
Member

@mroeschke

  • Deprecate the NumPy nullable types

just to be clear, Int, Float, Boolean and pd.NA variants of StringArray only?

Deprecate mixing Arrow types and Numpy types in any operation

does that include numpy object type?

@Dr-Irv
Contributor

Dr-Irv commented Jun 11, 2025

I'm going to throw out another (possibly crazy) idea based on these comments:

If we were starting pandas today, we would only use Arrow as storage for DataFrame columns / Series

An alternative to completely deprecating and removing the pandas NumPy nullable types is to spin them off into their own repository & package and treat them like any other 3rd party ExtensionArray library for users that still want to use them.

The idea is as follows:

  1. We create a new repository corresponding to what is in pandas 2.3 - let's call it npandas ("n" indicating "numpy-based")
  2. pandas 3.0 goes fully PyArrow - removes the numpy storage and removes the nullable extension types

We only do bug fixes to npandas - so if people want to use the numpy types or nullable types, they use that version.

If they want "modern" pandas, they fully buy into pandas 3.0 and PyArrow support - requiring pyarrow and start working with the future.

Then we don't have to worry about a transition path that WE maintain. The transition is up to the users. They can take working code with npandas 2.3 and try pandas 3.0, and report issues in converting from npandas 2.3 to pandas 3.0. But we don't have to keep all these code paths working - we have 2 separate projects.

@WillAyd
Member

WillAyd commented Jun 11, 2025

Generally curious, but why do we feel the need to remove the extension types? If anything, couldn't we just update them to use PyArrow behind the scenes?

I realize we use the term "numpy_nullable" throughout, but conceptually there's no difference between what they and their corresponding Arrow types do. The Arrow implementation is just going to be more lightweight, so why not update those in place and not require users to change their code?

@datapythonista
Member Author

All proposals and discussion points so far seem very interesting. Personally I think it's great we are having this conversation.

Based on the feedback above, what makes most sense to me is:

  • Require PyArrow in 3.0
  • We deprecate the NaN behavior in Arrow types. Correct me if I'm wrong, but I think it's not a huge effort, Bodo will probably fund the development, and Brock and Simon have availability to do it
  • We default to PyArrow dtypes starting in 3.0, but we provide a global flag (e.g. pandas.options.mode.use_legacy_typing) for users who aren't ready to use the Arrow types.

This wouldn't take us to a perfect situation, but it makes the transition path very easy for both users and us, and we can start focusing on pandas backed by Arrow, while gathering feedback from users quite fast without causing them any inconvenience (other than adding a line of code to set the option). After getting the feedback we will be in a better position to decide on deprecating the legacy types, moving them to a separate repo, maintaining them forever...

@mroeschke
Member

@simonjayhawkins

just to be clear, Int, Float, Boolean and pd.NA variants of StringArray only?

Yes and I'd say all variants of StringArray (ArrowExtensionArray supports all functionality of StringArray)

does that include numpy object type?

I would say yes.

@jbrockmendel
Member

We default to PyArrow dtypes starting in 3.0, but we provide a global flag

3.0 is already huge (CoW, Strings) and way, way late. Let's not risk adding another year to that lateness.

@jbrockmendel
Member

We deprecate the NaN behavior in Arrow types. Correct me if I'm wrong, but I think it's not a huge effort, Bodo will probably fund the development, and Brock and Simon have availability to do it

Deprecating that behavior is relatively easy (though I kind of expect @jorisvandenbossche to chime in that we're missing something important). The hard part is the step after that: telling users they have to change all their existing ser[1] = np.nan to use pd.NA instead (and constructor calls, and CSVs).
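
A sketch of the kind of user-code change that step would require, assuming the deprecation lands as described:

```python
import numpy as np
import pandas as pd

ser = pd.Series([1.0, 2.0], dtype="float64[pyarrow]")

ser[1] = np.nan  # today: silently treated as a missing value (pd.NA)

ser[1] = pd.NA   # after the deprecation: the way to set a missing value;
                 # np.nan would instead store a real, non-missing float NaN
```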

As to "huge effort", off the top of my head, some tasks that would be required in order to make pyarrow dtypes the default include:

a) making object, category, interval, period dtypes Just Work
b) changing the non-numeric Index subclasses to use pyarrow dtypes
c) updating all the tests to work with both/all backend global options. Even if all the non-test code is bug-free, this is months of work.
d) (longer-term) try to ameliorate the performance hit for axis=1 operations, arithmetic, etc

@datapythonista
Member Author

Thanks @jbrockmendel, this is great feedback.

Do you think implementing the global option (without defaulting to arrow or fixing the above points) is reasonable for 3.0 (releasing it soon)?

For the numpy nullable types, does that same list apply? Or is just the work on missing value semantics needed to consider them ready to be the default?

@jbrockmendel
Member

Everything except the performance would need to be done regardless of the default.

@jorisvandenbossche
Member

Trying to catch up with all the discussions... but already two quick points of feedback:

  1. Timing:

We default to PyArrow dtypes starting in 3.0

3.0 is already huge (CoW, Strings) and way, way late. Let's not risk adding another year to that lateness.

I agree with @jbrockmendel here. Personally I would certainly not consider any of this for 3.0. There is still lots of work to do before we can even consider making the pyarrow dtypes the default (right now users cannot even fully try this out), which I think will easily take at least a year.
(and I say that as the person who caused a year of delay for 3.0 by thinking we could "quickly" include the string dtype feature in pandas 3.0, at a point where we were otherwise already ready to release ...)

  2. General approach of the "PyArrow dtypes":

Personally, I would prefer that we go for the model chosen for the StringDtype, i.e. not necessarily the fact that there are multiple backends (or the na_value), but the fact that we went for pd.StringDtype() as the default dtype and not pd.ArrowDtype(pa.large_string()).
We should make good use of the Arrow type system, but IMO we 1) should not necessarily expose all of its details to our users (e.g. string vs large_string vs string_view), but have a default "string" dtype (with options for power users that want to customize it), and so I think we should hide some implementation details of (Py)Arrow under a nice "pandas logical dtype" system (i.e. @WillAyd's PDEP), and 2) by having separate classes per "type" of dtype, we can also provide a better user experience (e.g. a datetime/timestamp dtype will have a timezone attribute, but numeric dtypes don't need this).

As another example of this, I think we should create a variant of pd.CategoricalDtype that is nullable and uses pyarrow under the hood, and not have people rely on the fact that this translates to pd.ArrowDtype(pa.dictionary(...))
(in line with what Brock said above about "making object, category, interval, period dtypes Just Work")
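
For illustration, the two models side by side, using APIs that both exist in pandas 2.x:

```python
import pandas as pd
import pyarrow as pa

# Logical-dtype model (what StringDtype chose): Arrow storage is an
# implementation detail behind a pandas-level dtype.
s1 = pd.Series(["a", "b", None], dtype=pd.StringDtype("pyarrow"))

# Exposed-Arrow model: the concrete Arrow type is part of the user-facing dtype.
s2 = pd.Series(["a", "b", None], dtype=pd.ArrowDtype(pa.large_string()))
```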

@simonjayhawkins
Member

Thanks @jorisvandenbossche

2. General approach of the "PyArrow dtypes":

This aligns with my expectation of the future direction of pandas. Do we have any umbrella discussion anywhere that could be used as the basis of a PDEP, so this is on the roadmap? (note: I'm not in any way volunteering to take that on)

I don't think PDEP-16 covers using PyArrow as a backend only and hiding the functionality behind our pandas nullable types, as it is focused on the missing value semantics. Now, I appreciate that this is part of the plan, but it is not the plan as a roadmap item.

But to be fair, ArrowDtype was introduced without a PDEP or, IMO, a roadmap item that covered what was implemented. As said above, these types have been promoted at conferences etc. and are potentially being actively used. This is now, IMO, pulling the development team in two different directions. For instance, I am very frustrated that the ArrowDtype for string is available to users, as it has caused so much confusion even within the team. IMO we don't need it as "experimental" since we have the pandas nullable dtype available. However, I also appreciate that for evaluating an experimental dtype system (not just individual dtypes) we should perhaps include it. So I am conflicted.

Now we are where we are. And we probably now have as many (or more) core devs who appear to favor the ArrowDtype system. So managing that is now likely as difficult as the technical discussions themselves.

@mroeschke please don't take this as criticism, as the work you have done is great (for the whole project, the maintenance, and not just the Arrow work) and any issues that have arisen are systematic failures. As @WillAyd pointed out, the PDEP process needs to allow decisions to be made without every minute detail being ironed out before the PDEP is approved.

@datapythonista
Member Author

Thanks all for the feedback. Personally, it feels like, while opening so many discussions has been a bit chaotic (sorry about that, as I started or reopened most of them), it's been very productive.

I'd like to know how you feel about trying to unify all the type system topics into a single PDEP that broadly defines a roadmap for the next steps and the final goal. Maybe I'm too optimistic, but I think there is agreement on most points, and it's just the details that need to be decided. Some of the things that I think could go into the PDEP and on which there is mostly agreement:

  • "Final" pandas dtypes backed by arrow (probably also agreement on not exposing it to the user)
  • Eventually making PyArrow a required dependency (unclear when)
  • Implementing a global option to at least change between the status quo and the final dtype system (maybe other options)
  • The PyArrow dtypes still need work before being the default/final:
    • NA semantics
    • Functionality not yet implemented
    • Bugs / undesired behavior
    • Performance
  • We should be testing the final PyArrow type system via our test suite with the global option (this can lead the work on what needs to be fixed)
  • There should be a transition path from the status quo to the final types system that needs to be defined
  • We need to take into consideration that we have around 500 work hours of funding, and that, if things don't change, it doesn't seem likely that anyone will put in the huge amount of time needed to finish the roadmap

Do people feel like it makes sense to write a PDEP with the general picture of this roadmap (one that should supersede this issue, PDEP-15, and probably others)? Does anyone want to lead the effort to get this PDEP done?

@mroeschke
Member

please don't take this as criticism as the work you have done is great

None taken :). Yes, the development of these types preceded our agreed-upon approval process for larger pandas enhancements.

Personally, I would prefer that we go for the model chosen for the StringDtype

FWIW, the early model of ArrowDtype almost followed a variant of the StringDtype model (#46972). I have come to like the benefits of the current ArrowDtype (probably with some bias), but I'll argue my case in a different forum.
