Electromagnetic 4-Force using 4D Vector Product

talanum1

Senior Member
I found a way to define the vector product of two 4-dimensional vectors. With this I was able to derive a simpler formula for the Electromagnetic 4-Force.

See: "Academia.edu" and search for "W.F. Esterhuyse Electromagnetic 4-Force using 4D Vector Product."
 
I found a way to define the vector product of two 4-dimensional vectors. With this I was able to derive a simpler formula for the Electromagnetic 4-Force.

Not sure about any of that, but I found that if I stand facing into a good, stiff 30 km/h wind, turned about 15° clockwise, and it is close to one of the equinoxes, my nose whistles.
Apparently it is all due to a deviated septum.

My dog likes it though.
 
I found a way to define the vector product of two 4-dimensional vectors.

How did you do it? Outer product? Wedge product? As you likely know, there is no cross product in four dimensions.
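
To illustrate the wedge/outer-product option just mentioned (a minimal sketch of my own in Python/numpy, not the construction from the paper): the usual 4D analogue of "multiplying" two 4-vectors is a bivector, an antisymmetric 4x4 array, rather than another vector; the electromagnetic field tensor is an antisymmetric object of this same type.

Code:
import numpy as np

def wedge(u, v):
    # Wedge (exterior) product of two 4-vectors, represented as the
    # antisymmetric 4x4 array u_i v_j - u_j v_i (a bivector / 2-form).
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    return np.outer(u, v) - np.outer(v, u)

B = wedge([1.0, 0.0, 0.0, 2.0], [0.0, 1.0, 0.0, 3.0])
print(np.allclose(B, -B.T))  # True: antisymmetric, with 6 independent components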

With this I was able to derive a simpler formula for the Electromagnetic 4-Force.

Quaternions?

Hodge star?

See: "Academia.edu" and search for "W.F. Esterhuyse Electromagnetic 4-Force using 4D Vector Product."

Okay. I'm interested. I have applications for 4 dimensions.

You're not allowed to use exterior geometry in spacetime, right?

I have the opposite problem. What lives outside of my 4 dimensions is practically infinite-dimensional; it has to do with partitions and configurations.
 
Not sure about any of that, but I found that if I stand facing into a good, stiff 30 km/h wind, turned about 15° clockwise, and it is close to one of the equinoxes, my nose whistles.
Apparently it is all due to a deviated septum.

My dog likes it though.
Bwahahaha
 
I found a way to define the vector product of two 4-dimensional vectors. With this I was able to derive a simpler formula for the Electromagnetic 4-Force.

See: "Academia.edu" and search for "W.F. Esterhuyse Electromagnetic 4-Force using 4D Vector Product."

Wouldn't an Electromagnetic 4-Force be a fourth-order tensor? ... asking for a friend ...

Vector products can be defined any way you want ... the question is "of what use" is the operation ...
 
Wouldn't an Electromagnetic 4-Force be a fourth-order tensor? ... asking for a friend ...

Vector products can be defined any way you want ... the question is "of what use" is the operation ...
I watched a two-hour seminar on four-dimensional vector equations. I'm not particularly a math wimp, but this one had me scratching my head, I can tell you. Aside from the fact that we really can't even prove there are more than three dimensions... if we could, what the hell would we do with the knowledge?
 
I watched a two-hour seminar on four-dimensional vector equations. I'm not particularly a math wimp, but this one had me scratching my head, I can tell you. Aside from the fact that we really can't even prove there are more than three dimensions... if we could, what the hell would we do with the knowledge?

We're just a couple of Classical guys living in a Modern world ...
 
I watched a two-hour seminar on four-dimensional vector equations. I'm not particularly a math wimp, but this one had me scratching my head, I can tell you. Aside from the fact that we really can't even prove there are more than three dimensions... if we could, what the hell would we do with the knowledge?

It is just offal: just last week people in this place were arguing that time did not really exist, that it was all just part of the illusory nature of space! Now we are calculating the vector analysis of something many here dispute even really exists?

This is going to take me some time to sort out. Whoops!
 
It is just offal: just last week people in this place were arguing that time did not really exist, that it was all just part of the illusory nature of space! Now we are calculating the vector analysis of something many here dispute even really exists?

This is going to take me some time to sort out. Whoops!
I wish to hell time didn't exist..... There are a few things that I used to really like doing....😎
 
I wish to hell time didn't exist..... There are a few things that I used to really like doing....😎

Well then, thank your lucky stars you still have TIME to do them again! 🌠

Or TIME to find something new you like even better!
 
How did you do it? Outer product? Wedge product? As you likely know, there is no cross product in four dimensions.
There is a cross product in any dimension; it just doesn't have all the properties of the 3D cross product.
Quaternions?

Hodge star?
None of those: I define the determinant of an nxm (m > n) matrix A as a sum of nxn determinants, one term for each combination of deleted columns that leaves n columns, reversing a term's sign if an odd number of column swaps is required to bring the remaining n columns to the leftmost positions. But there is an alternative method. (See the sketch at the end of this post.)
Wouldn't an Electromagnetic 4-Force be a fourth-order tensor? ... asking for a friend
Yes, but you can also call it a vector.
Vector products can be defined any way you want ... the question is "of what use" is the operation ...
It is used to compute a vector perpendicular to two given vectors, with size related to the area spanned by the two vectors (if you can call it "area").
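
For concreteness, here is a small Python sketch of the rule as I read it (my own code, assuming numpy; it is not taken from the paper). For a square matrix it reduces to the ordinary determinant.

Code:
from itertools import combinations
import numpy as np

def det_R(A):
    # Rectangular "determinant" of an n x m matrix (m >= n), per the rule
    # described above: sum the n x n minors over every choice of n kept
    # columns, with the sign given by the parity of the column swaps needed
    # to move the kept columns to the leftmost positions.
    A = np.asarray(A, dtype=float)
    n, m = A.shape
    total = 0.0
    for kept in combinations(range(m), n):
        swaps = sum(col - rank for rank, col in enumerate(kept))
        sign = -1.0 if swaps % 2 else 1.0
        total += sign * np.linalg.det(A[:, list(kept)])
    return total

print(np.isclose(det_R(np.eye(3)), 1.0))  # True: square case is the usual determinant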
 
There is a cross product in any dimension; it just doesn't have all the properties of the 3D cross product.

No, a cross product is defined to yield a vector. That only works in 3 and 7 dimensions. And 7 is funky; it only works halfway (that's why the physicists don't use it).

None of those: I define the determinant of an nxm (m > n) matrix A as a sum of nxn determinants, one term for each combination of deleted columns that leaves n columns, reversing a term's sign if an odd number of column swaps is required to bring the remaining n columns to the leftmost positions.

Oh.

Is there an algebraic proof for this?

But there is an alternative method.

Yes, but you can also call it a vector.

It is used to compute a vector perpendicular to two given vectors, with size related to the area spanned by the two vectors (if you can call it "area").

If you provide THREE vectors in a 4d space and put them in a matrix, you can add a fourth row and take a determinant. In that case you get a volume.

If you only provide two vectors you'll get a 2d subspace, which is not enough to uniquely determine an orthogonal direction. You can choose or create a vector within that subspace and in some cases you'll get the right one, but it doesn't work in 4 dimensions.
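
For reference, the standard 4d construction takes three vectors rather than two; here is a minimal sketch (mine, assuming numpy, not anything from the paper) of that generalized cross product built from 3x3 minors:

Code:
import numpy as np

def cross4(u, v, w):
    # Generalized cross product of THREE 4-vectors: component i is (-1)^i
    # times the 3x3 minor obtained by deleting column i of the 3x4 matrix
    # [u; v; w]. The result is orthogonal (Euclidean dot product) to u, v, w.
    M = np.array([u, v, w], dtype=float)
    return np.array([(-1) ** i * np.linalg.det(np.delete(M, i, axis=1))
                     for i in range(4)])

u = np.array([1.0, 0.0, 0.0, 2.0])
v = np.array([0.0, 1.0, 0.0, 3.0])
w = np.array([0.0, 0.0, 1.0, 4.0])
c = cross4(u, v, w)
print(c)                                          # approximately [ 2,  3,  4, -1]
print(np.dot(c, u), np.dot(c, v), np.dot(c, w))   # all approximately 0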
 
No, a cross product is defined to yield a vector.

The cross product operation is anticommutative ... we can treat the result as a vector, and this is very useful when dealing with rotational motion ... but I believe it's an element of an anticommutative ring rather than a vector space ...

Have you read the paper in the link? ... the proof should be there ...
 
The 3x4 determinant is defined correctly: the cross product then follows from it and must be correct. See post #1 for where to find a proof.

It must not satisfy (a x b) x c = (a.c)b - (b.c)a, as this identity was used in the proof that the cross product is only valid in 3D.
 
The 3x4 determinant is defined correctly: the cross product then follows from it and must be correct. See post #1 for where to find a proof.

It must not satisfy (a x b) x c = (a.c)b - (b.c)a, as this identity was used in the proof that the cross product is only valid in 3D.

Sorry, not buying it. You're violating the basic rules of algebra.

The determinant of a product is the product of determinants. This relationship fails entirely for non-square matrices.

Determinants describe how a linear transformation scales volume. You can't scale one dimension more than another using a single number.

Cofactor expansion is only possible with square matrices.

You need to provide a proof of your math.
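
For what it's worth, the product rule referred to above is easy to check numerically in the square case (a quick illustration of mine, assuming numpy; np.linalg.det simply refuses rectangular input):

Code:
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))
B = rng.normal(size=(3, 3))
# det(AB) = det(A) det(B) holds for square matrices; np.linalg.det raises
# LinAlgError for non-square input, so there is no rectangular analogue here.
print(np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B)))  # True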
 
Here is the proof; it is almost the same as the one in the textbook (it's in TeX format):

\parbox{14cm}{Alternatively, for an $n \times m$ determinant with $m > n$, one may choose to write this as a sum of $n \times n$ determinants as follows:}
\paragraph{}
\parbox{14cm}{1. Write it as a sum of $n \times n$ determinants, one for each combination of deleted columns such that $n$ columns remain, with the deleted columns shown explicitly.}
\paragraph{}
\parbox{14cm}{2. Swap columns with deleted columns until all remaining columns are leftmost.}
\paragraph{}
\parbox{14cm}{3. Change a term's sign to negative if an odd number of swaps was required.}
\paragraph{}
\parbox{14cm}{3.1 Theorem}
\paragraph{}
\parbox{14cm}{For an $m \times n$ matrix $\mathbf{A}$ with $n > m$ (more columns than rows), the matrix has a right inverse if $\det_R(A) \neq 0$, namely $A^{-1} = \left(\tfrac{1}{\det_R(A)}\right)A_{CF}^T$,}
\paragraph{}
\parbox{14cm}{where $A_{CF}^T$ is the cofactor matrix of $A$, transposed.}
\paragraph{}
\parbox{14cm}{Proof:}
\paragraph{}
\parbox{14cm}{We must prove: $G = AA^{-1} = A\left(\tfrac{1}{\det_R(A)}\right)A_{CF}^T = \left(\tfrac{1}{\det_R(A)}\right)AA_{CF}^T = I_{m\times m}$.}
\paragraph{}
\parbox{14cm}{We have, by definition of matrix multiplication:}
\paragraph{}
\parbox{14cm}{$g_{kl} = \frac{1}{\det_R(A)}\sum_{s=1}^{n} a_{ls}A_{ks}$,}
\paragraph{}
\parbox{14cm}{For $l = k$ the last sum is the development of $D = \det_R(A)$ by the $k$'th row. Hence:}
\paragraph{}
\parbox{14cm}{$g_{kk} = \frac{1}{\det_R(A)}\sum_{s=1}^{n} a_{ks}A_{ks} = 1$,}
\paragraph{}
\parbox{14cm}{which holds so long as we also develop the determinants in $\det_R(A)$ and $A_{ks}$ by rows.}
\paragraph{}
\parbox{14cm}{For $l \neq k$ this sum is the development by the $k$'th row of the determinant $D'$ obtained from $D$ by replacing the $k$'th column of $D$ with the $l$'th column of $D$. This has two identical columns and is zero, because we can write the determinant as the sum of $m \times m$ determinants with two identical columns and $n-m$ columns deleted, where the deletion does not include either of the repeated columns, plus pairs of determinants of $D'$ with one of the repeated columns deleted. Such a determinant looks like:}
\paragraph{}
\parbox{14cm}{$\begin{vmatrix}
A_{11}& A_{11} &A_{31} &... &A_{n-m,1}&[]&...&[]\\
A_{12}&A_{12}&A_{32}&...&A_{n-m,2}&[]&...&[]\\
...
\end{vmatrix}$}
\paragraph{}
\parbox{12cm}{where each such determinant will give rise to two determinants:}
\paragraph{}
\parbox{14cm}{$\begin{vmatrix}
A_{11} & [] &A_{31} &... &A_{n-m,1}&[]&...&[]\\
A_{12} & [] &A_{32}&...&A_{n-m,2}&[]&...&[]\\
...
\end{vmatrix}$}
\paragraph{}
\parbox{14cm}{$+
\begin{vmatrix}
[]& A_{11} &A_{31} &... &A_{n-m,1}&[]&...&[]\\
[]&A_{12}&A_{32}&...&A_{n-m,2}&[]&...&[]\\
...
\end{vmatrix}$}
\paragraph{}
\parbox{14cm}{and by our rule for changing a term's sign, these produce the same terms, just with opposite signs. This takes care of the case where the two identical columns have an even number of columns between them. For the case when the two identical columns have an odd number of columns between them, we don't have opposite signs after shifting all the deleted columns to the rightmost positions, but then we need an odd number of column swaps to make the determinants identical. QED.}

Here is how we develop cofactors:

$\begin{vmatrix}
a_{11}&a_{12}&a_{13}&a_{14}\\
a_{21}&a_{22}&a_{23}&a_{24}\\
a_{31}&a_{32}&a_{33}&a_{34}\\
\end{vmatrix}$

Then the cofactor $A_{12}$ equals

$\begin{vmatrix}
a_{21}&a_{23}&a_{24}\\
a_{31}&a_{33}&a_{34}\\
\end{vmatrix}$

This expands into a sum of 2x2 determinants:

$\begin{vmatrix}
a_{21}&a_{23}&[]\\
a_{31}&a_{33}&[]
\end{vmatrix}+
\begin{vmatrix}
a_{21}&[]&a_{24}\\
a_{31}&[]&a_{34}
\end{vmatrix}+
\begin{vmatrix}
[]&a_{23}&a_{24}\\
[]&a_{33}&a_{34}
\end{vmatrix}$

and the second term's sign must be inverted.
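
That sign pattern can be checked mechanically with the swap-counting rule (a small sketch of mine, not code from the paper):

Code:
from itertools import combinations

# Sign pattern for the 2x2 terms of a 2x3 rectangular determinant, using the
# swap-counting rule stated above.
for kept in combinations(range(3), 2):
    swaps = sum(col - rank for rank, col in enumerate(kept))
    print(kept, '+' if swaps % 2 == 0 else '-')
# (0, 1) +   first term
# (0, 2) -   second term: sign inverted, as stated
# (1, 2) +   third term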
 
(Quoting the proof from the previous post.)

"Swap columns with deleted columns", what does that mean?

Walk us through this. Here is a trivial scenario.

There are two spaces, a 3-space and a 4-space. We want mappings (matrices) between them.

We will use the case in your example, where m > n.

This equates to a "projection" of 4 dimensions onto 3. So we will call our 4-dimensional vector P, consisting of

x
y
z
t

Here is a 3x4 matrix:

1 0 0 2
0 1 0 3
0 0 1 4

Let's call it M. To apply the transformation we must multiply M times P

So now we want the determinant of M.

Your method says: make all combinations of deleted columns such that 3 remain. The result is (working backwards through the columns):

1 0 0
0 1 0
0 0 1

deleted: column 4

2
3
4

1 0 2
0 1 3
0 0 4

deleted: column 3

0
0
1

1 0 2
0 0 3
0 1 4

deleted: column 2

0
1
0

0 0 2
1 0 3
0 1 4

deleted: column 1

1
0
0

That's all the combinations.

Now what? What am I supposed to swap, and why?
 
You must delete the columns explicitly by using the sign for "deleted entry" = []. Then the determinant of the given matrix is computed as follows: det_R of

1 0 0 2
0 1 0 3
0 0 1 4

equals the sum of four terms, one per deleted column (shown as []):

|1 0 0 []|
|0 1 0 []|
|0 0 1 []|

+

|1 0 [] 2|
|0 1 [] 3|
|0 0 [] 4|

+

|1 [] 0 2|
|0 [] 0 3|
|0 [] 1 4|

+

|[] 0 0 2|
|[] 1 0 3|
|[] 0 1 4|

and after shifting the remaining columns to the left and applying the sign rule this becomes

|1 0 0|
|0 1 0|
|0 0 1|

-

|1 0 2|
|0 1 3|
|0 0 4|

+

|1 0 2|
|0 0 3|
|0 1 4|

-

|0 0 2|
|1 0 3|
|0 1 4|

Note the two negative signs. Why? Because the proof requires it.
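
For anyone following along, the four terms and the total can be checked numerically with the stated sign rule (my own sketch, assuming numpy):

Code:
import numpy as np

M = np.array([[1.0, 0.0, 0.0, 2.0],
              [0.0, 1.0, 0.0, 3.0],
              [0.0, 0.0, 1.0, 4.0]])
terms = []
for kept in [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]:
    # parity of the column swaps needed to move the kept columns to the left
    swaps = sum(col - rank for rank, col in enumerate(kept))
    terms.append((-1) ** swaps * np.linalg.det(M[:, list(kept)]))
print([round(t) for t in terms])  # [1, -4, -3, -2]
print(round(sum(terms)))          # -8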
 
Thank you. I'll have to play with this for a while.

Just now I'm tied up with forward and inverse kinematics, which is my 4d application.
 
Okay, so we have 1 - 4 + (-3) - 2. Yes?

Your claim is that the determinant of matrix M is -8.

Here's the problem:

This only works when the 3-d subspace of the destination uses the same metric as the source.

Because the scaling can only be determined that way. The algebra says the norm of the result has to be -8 times the norm of the source, so the contribution of dt has to be 0, or it has to be identically 0, because the (Minkowski) norm of the source vector has a -t^2 term.

So this will work in Minkowski space if you're only dealing with the spatial dimensions, but it won't work if you include time. (Even if you hold dt at 0, because then you have no 4-space volume at all).

As near as I can discern, your method engages in some kind of oddball "projection" (in the geometric sense), which I don't have time to figure out right now. The method of deleting columns is effectively just removing dimensions, projecting 4d down to 3 along each principal (basis) axis.
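
To illustrate the metric point (my own sketch, assuming numpy, and not a claim about the paper's method): a minor-based construction gives Euclidean orthogonality, which is not the same as orthogonality under the Minkowski metric diag(1, 1, 1, -1), with time last to match the x, y, z, t ordering used earlier in the thread.

Code:
import numpy as np

def cross4(u, v, w):
    # generalized cross product of three 4-vectors from 3x3 minors
    M = np.array([u, v, w], dtype=float)
    return np.array([(-1) ** i * np.linalg.det(np.delete(M, i, axis=1))
                     for i in range(4)])

eta = np.diag([1.0, 1.0, 1.0, -1.0])   # Minkowski metric, time component last
u = np.array([1.0, 0.0, 0.0, 1.0])
v = np.array([0.0, 1.0, 0.0, 1.0])
w = np.array([0.0, 0.0, 1.0, 1.0])
c = cross4(u, v, w)
print(np.dot(c, u))   # approximately 0: Euclidean orthogonality holds
print(u @ eta @ c)    # approximately 2: Minkowski orthogonality does not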
 