
A lot of numerical solvers (such as this one), for some reason, only support solving Ax=b where x and b are vectors.

What if I want to solve AX = B, where X and B are matrices?

I know that numerically one should almost never invert a matrix, and there are specific reasons I want to use an iterative solver: for example, I have a reasonable guess of the solution. I just want to find a way to generalize the Ax=b solver (where x, b are vectors) to AX=B (where X, B are matrices).

I can loop over the columns, but looping has overhead, and I don't want to use numpy.linalg.solve directly because I do want to use the special features of the Ax=b solver. Is there a feature in Python that speeds this up?

  • Reading the docs, it looks like the SciPy library already supports a matrix for A and a vector for B. Can you clarify the question and include an error message? Commented Jul 25, 2024 at 15:41
  • @darthbith I want to solve with B as a matrix, though. Commented Jul 25, 2024 at 16:09
  • So A is a 3-D matrix then? Commented Jul 25, 2024 at 17:00
  • @darthbith, I think he wants the regular 2d A case, but with multiple x vector solutions (effectively in parallel). Commented Jul 26, 2024 at 0:02
  • @hpaulj Possibly, but there are a number of approaches to take depending on what the OP actually wants, and we don't have a clear answer. Commented Jul 26, 2024 at 13:56

2 Answers


There is no inherent reason iterative solvers cannot be extended to handle batched right-hand sides. An immediate question is of course how to define the stopping criterion: different columns of B may need different numbers of iterations. This is likely one reason why the scipy.sparse.linalg solvers only accept a single right-hand side.

That said, these solvers are pure Python, so you can rather easily drop them into your code and adapt them to your specific problem.
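As a minimal sketch of the column-by-column approach (the function name `solve_columns` is made up here, and this assumes A is symmetric positive definite so that `scipy.sparse.linalg.cg` applies; for a general A you would swap in `gmres` or `bicgstab`):

```python
import numpy as np
from scipy.sparse.linalg import cg

def solve_columns(A, B, X0=None):
    """Solve AX = B one column at a time with CG, warm-starting from X0."""
    X = np.zeros_like(B, dtype=float)
    for j in range(B.shape[1]):
        # Use the corresponding column of the guess X0, if one was given.
        x0 = None if X0 is None else X0[:, j]
        x, info = cg(A, B[:, j], x0=x0)
        if info != 0:
            raise RuntimeError(f"CG did not converge on column {j} (info={info})")
        X[:, j] = x
    return X
```

Note that each column gets its own stopping decision, which is exactly the per-column iteration-count issue described above; the Python-level loop cost is usually negligible next to the matrix-vector products inside the solver.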


2 Comments

But I want to avoid the Python looping overhead?
Taylor, is that Python looping overhead actually killing you? Often a few (relatively speaking) iterations on a complex task are optimal. What we try to avoid is many iterations on a simple task. Also check the source code of your solver, if available.

Wanting to solve AX=B when all terms are matrices is a red herring: you already have the answer. We can always move terms by left/right multiplying with an inverse leaving us with the form V = W, which doesn't need solving: V is W, we're already done.

If we have AX=B, and we know A and B and need to find X, then we just calculate A⁻¹B:

AX = B, thus
A⁻¹AX = A⁻¹B, thus
X = A⁻¹B, QED

Writing a function for that would probably even be overkill, but if you really wanted to, it's a one-liner (assuming you're already importing matmul and inv since you're working with matrices):

from numpy import matmul
from numpy.linalg import inv

def mat_solve_ax_is_b(A, B):
    return matmul(inv(A), B)

(of course, A needs to be invertible, but if you're working at the level where you're trying to solve Ax=b and AX=B, it feels reasonable to assume you're working with "well behaved" matrices)
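For comparison, and as the commenters below point out, numpy.linalg.solve already accepts a matrix right-hand side and avoids forming the inverse explicitly (a quick sketch with an arbitrary well-conditioned A):

```python
import numpy as np

rng = np.random.default_rng(0)
# Shifting by 4*I keeps this random A comfortably invertible.
A = rng.standard_normal((4, 4)) + 4 * np.eye(4)
B = rng.standard_normal((4, 3))

# One LU factorization of A is reused for every column of B,
# which is both faster and more accurate than computing inv(A) @ B.
X = np.linalg.solve(A, B)
```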

7 Comments

numerically one should almost never invert a matrix. and there are special reasons I want to use an iterative solver. For example I have a reasonable guess of the solution
Cool: remember to put all of that in your post because the fewer details you give, the less useful answers you get. Be specific, get good answers.
got it, thanks ...
np.linalg.inv says it does the matrix solve, A X = I. I think the question here is whether any of the scipy.sparse iterative solvers handles matrices. Maybe not, since the optimal number of iterations could vary.
why not simply use np.linalg.solve(A, B) instead of reinventing the wheel?
