I remember reading why dry-transaction
was deprecated a few years ago, but now I can’t find that page.
I would also love to know what makes dry-operation
a good replacement.
I haven’t looked at dry-operation, but with regard to dry-transaction, the idea was that the Do notation from dry-monads is sufficient.
Thanks!
I read that dry-transaction
had some limitations and that the Do notation was far more flexible than dry-transaction’s simple “step” definitions while being just as easy to use.
I’m trying to explain to my team why the Do notation is preferred, and it would help to know what those limitations are and why the Do notation is more flexible.
@timriley, perhaps you could shed some light into this?
@wilsonsilva, the main reason we’re pursuing a successor is to move away from a class-level step API to an instance-level step API.
While the class-level API looked nice and friendly on the surface (and I do believe this was a big part in driving dry-transaction’s uptake), it had limitations that made it difficult to create transaction classes out of steps that might need different input arguments. With dry-transaction, every step’s input had to be exactly the output of the previous step. This is not helpful when you want to compose transactions from objects that might receive differently shaped inputs.
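For example, in a simplified sketch like this (step names made up, same subclass style as the examples below), each step method only ever receives the value the previous step wrapped in its Success, so there’s no clean way to hand a step an extra, differently shaped argument:

class CreateSignup < Dry::Transaction
  step :validate
  step :persist

  private

  # Receives the original input to the transaction.
  def validate(input)
    input[:email] ? Success(input) : Failure(:invalid)
  end

  # Can only receive exactly what #validate returned in its Success;
  # there is nowhere to pass it a second argument of a different shape.
  def persist(validated)
    Success(validated)
  end
end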
The dry-transaction approach for handling database transaction rollbacks (“around” steps) was also ungainly and hard to understand.
dry-monads Do notation handles both of these cases much more easily, but dry-monads is still a relatively low-level library.
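For reference, the same kind of flow written with plain dry-monads Do notation looks roughly like this (a minimal sketch; the class and step names are illustrative):

require "dry/monads"

class CreateSignup
  include Dry::Monads[:result, :do]

  def call(user_attrs, account_attrs)
    user = yield create_user(user_attrs)
    account = yield create_account(user, account_attrs)

    Success([user, account])
  end

  private

  # Each step takes whatever arguments suit it; `yield` unwraps a Success
  # or short-circuits #call with the Failure.
  def create_user(attrs)
    attrs[:email] ? Success(attrs) : Failure(:invalid_user)
  end

  def create_account(user, attrs)
    Success(attrs.merge(owner: user))
  end
end

That flexibility is exactly what we want, but there’s a bit of ceremony to it (the mixins, the explicit Success at the end), which is part of what dry-operation smooths over.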
So with dry-operation, we’re taking what’s great about dry-monads Do notation and evolving it into a turnkey experience. It gives the user everything they need to build more flexible operation objects, while still providing a clear step-oriented API along with built-in support for database transactions from various ORMs (we have ROM and ActiveRecord support already).
The API in dry-operation will be quite similar, just at the instance level instead of the class level. So a dry-transaction class like this:
class MyTransaction < Dry::Transaction
  step :foo
  step :bar
  # ...
end
Can become this:
class MyOperation < Dry::Operation
  def call(input)
    output = step foo(input)
    step bar(output)
  end

  # ...
end
The steps map directly across, just moved inside the call method. There is more explicit argument handling here, yes, but this is intentional: it means the user has a much clearer view and greater control over what values are going into each step.
Control over input args is especially valuable when you’re building operations composed of objects (e.g. via the Hanami Deps mixin or another auto-inject mixin) that might be used directly in other circumstances. It means those other objects can expose whatever API is most natural for them, and they can still be used inside an operation class like this, because the author of that class gets to handle those input arguments manually.
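A rough sketch of what that can look like (the Deps keys, the repo APIs, and the base class here are all illustrative assumptions; I’m also assuming the repos return dry-monads results so they can be passed to step):

class CreateSignup < MyApp::Operation
  include Deps["repos.user_repo", "repos.account_repo"]

  def call(user_attrs, account_attrs)
    # Each dependency keeps whatever API is natural for it; the operation
    # author decides exactly which values flow into each step.
    user = step user_repo.create(user_attrs)
    account = step account_repo.create_for_user(user.id, account_attrs)

    [user, account]
  end
end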
We haven’t ported the dry-transaction step adapters yet, but those could be carried across as instance methods too. (We might wait for user feedback before doing this, since users can just as easily define their own step adapters by adding private instance methods to their base operation class.)
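For instance, something in the spirit of dry-transaction’s try adapter could be a private helper on your own base operation class (a sketch; the helper name and rescued exceptions are up to you):

require "dry/monads"
require "dry/operation"

class ApplicationOperation < Dry::Operation
  include Dry::Monads[:result]

  private

  # Runs the block and turns the listed exceptions into Failures,
  # similar in spirit to dry-transaction's `try` step adapter.
  def try_step(*exceptions)
    Success(yield)
  rescue *exceptions => e
    Failure(e)
  end
end

Inside #call you could then write step try_step(KeyError) { attrs.fetch(:user) }, and a raised KeyError would flow through the usual failure handling.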
The database integration also demonstrates the improvement in clarity.
Whereas before we had this:
class MyTransaction < Dry::Transaction
  around :transaction, with: "transaction"
  step :create_user
  step :create_account
end
Now we can have this:
class MyOperation < Dry::Operation
  def call(attrs)
    transaction do
      step create_user(attrs[:user])
      step create_account(attrs[:account])
    end
  end
end
It’s much clearer where the transaction begins and ends, and it gives the user better control over where transactions are used within their operations.
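For completeness, wiring up one of the ORM integrations is roughly a matter of including the relevant extension and exposing the connection object. Something like this for ROM, going from memory, so treat the exact details as an assumption and check the dry-operation docs:

require "dry/operation"
require "dry/operation/extensions/rom"

class MyOperation < Dry::Operation
  include Dry::Operation::Extensions::ROM

  # The extension expects the ROM container to be reachable from the
  # instance (shown here via a simple reader; adjust to your own setup).
  attr_reader :rom

  def initialize(rom:)
    @rom = rom
    super()
  end

  def call(attrs)
    transaction do
      step create_user(attrs[:user])
      step create_account(attrs[:account])
    end
  end

  # create_user / create_account would be private methods returning results,
  # as in the earlier examples.
end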
I hope this helps explain things!
Thanks for the detailed explanation. It is crystal clear now!