Electromagnetic 4-Force using 4D Vector Product

It's a sum over all possible projections.

As for it not working with the Minkowski metric: do the 4D cross product and then impose the signature on the result by changing the sign of the 0th component.
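As a concrete sketch of that recipe (a numpy illustration of one common convention for the generalized cross product; the function names are mine, not anyone's canonical definition):

```python
import numpy as np

def cross4(u, v, w):
    # Generalized cross product of three 4-vectors: component i is the
    # signed cofactor obtained by deleting column i of the 3x4 matrix
    # [u; v; w] (Laplace expansion of a 4x4 determinant whose top row
    # is the basis vectors).
    M = np.array([u, v, w], dtype=float)
    return np.array([(-1.0) ** i * np.linalg.det(np.delete(M, i, axis=1))
                     for i in range(4)])

def cross4_minkowski(u, v, w):
    # Same product, then impose the (-,+,+,+) signature on the result
    # by flipping the sign of the 0th component, as suggested above.
    r = cross4(u, v, w)
    r[0] = -r[0]
    return r
```

The sign flip makes the result Minkowski-orthogonal to all three inputs: with η = diag(−1, 1, 1, 1), the η-product with each input vanishes because the Euclidean product already did.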
 
It's a sum over all possible projections.

As for it not working with the Minkowski metric: do the 4D cross product and then impose the signature on the result by changing the sign of the 0th component.

Okay, I get it. It's not "exactly" a determinant but it's still useful. I'd call it something else - pick a name.

Here's a scenario for a thought experiment:

Start with a unit circle in the x-y plane, in the standard Cartesian basis. Now lift it along the z axis (the direction given by the cross product of the two basis vectors) by some amount, call it A. You end up with a cylinder of length A (in the standard normalized basis).

There is no "matrix that gets you from a circle to a cylinder" (without changing the rules of algebra), unless you first specify the source in homogeneous coordinates, or unless you can make the Cartesian assumption about the additional dimension (in which case you induce the extra basis vector simply by providing the coordinate).

Homogeneous coordinates are an interesting case. You can project and then un-project, but in this process the uniqueness is lost (points become lines).

True invertibility will preserve uniqueness, yes?
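To make the homogeneous-coordinate point concrete, here is a small numpy sketch (the matrix and values are my own illustration): a 4×4 homogeneous matrix can translate along z, which no 3×3 linear matrix can do (linear maps fix the origin), and the projection step is exactly where uniqueness is lost.

```python
import numpy as np

A = 8.0  # the lift amount from the thought experiment (example value)

# Homogeneous 4x4 "lift": translate every point by A along z.
lift = np.array([[1.0, 0.0, 0.0, 0.0],
                 [0.0, 1.0, 0.0, 0.0],
                 [0.0, 0.0, 1.0, A],
                 [0.0, 0.0, 0.0, 1.0]])

p = np.array([1.0, 0.0, 0.0, 1.0])  # a point on the unit circle, w = 1
q = lift @ p                         # its lifted copy

def project(h):
    # Un-projecting: divide by w. Every point on the ray (s*x, s*y, s*z, s)
    # lands on the same (x, y, z) -- this is where points become lines.
    return h[:3] / h[3]
```

Since project(q) and project(2*q) agree, the homogeneous representative of a point is not unique, which is the loss of uniqueness described above.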
 
Type out the determinant containing the -t^2 term, maybe I can help you.

I call it something else: "determinant developed by row" (det_R) or "determinant developed by column" (det_C).
 
Type out the determinant containing the -t^2 term, maybe I can help you.

Oh - that's the norm, which depends on the metric. The usual formula is

L2^2(x) = g_μν x^μ x^ν

which is the dot product of x with itself, where g is the metric tensor and summation over the repeated indices is implied.
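A one-line check of that formula (numpy, with the (−,+,+,+) convention as an example choice of g; with g = I it reduces to the ordinary dot product, and the Minkowski case produces the −t² term mentioned above):

```python
import numpy as np

def norm_sq(x, g):
    # ||x||^2 = g_{mu nu} x^mu x^nu : the metric-weighted dot product
    # of x with itself, summation over repeated indices implied.
    return x @ g @ x

eta = np.diag([-1.0, 1.0, 1.0, 1.0])   # Minkowski metric, (-,+,+,+)
x = np.array([2.0, 1.0, 0.0, 0.0])     # (t, x, y, z) = (2, 1, 0, 0)
```

Here norm_sq(x, eta) gives −t² + x² = −4 + 1 = −3, while the Euclidean metric gives +5.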

I call it something else: "determinant developed by row" (det_R) or "determinant developed by column" (det_C).

It's still a 3 dimensional determinant. It's a scaling factor for a volume in 3 dimensions.

It makes no sense to compare a 3-volume with a 4-volume, right? You'd have to use the trick with the cylinder, "extruding" it into the higher dimension. Then you can calculate its volume. In my example (taking A = 8) you just multiply π r^2 by 8; in the general case it would be an integral.

Then to get back you can take a "slice" of the cylinder (dz), which returns you to the circle with area π r^2. Notice that slicing is not the same as dividing by 8 (the multiplicative inverse): dividing by 8 does not take you back to the circle, it takes you to a cylinder of length 1.
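The arithmetic of that example, spelled out (plain Python, with r = 1 and A = 8 as above):

```python
import math

r, A = 1.0, 8.0
circle_area = math.pi * r ** 2        # area of the unit circle
cylinder_volume = circle_area * A     # extrude into 3D: multiply by the length

# A z = const slice is genuinely 2D again: its area is pi r^2.
slice_area = circle_area

# Dividing the volume by 8 gives the same *number*, but it is still a
# 3-volume: the volume of a cylinder of length 1, not a circle's area.
unit_cylinder_volume = cylinder_volume / A
```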
 
The paper in question has been accepted for publication by Fundamental Journal of Mathematical Physics.
 
I guess the nD cross product will give the oriented area spanned by the two vectors. See (at timestamp 14:03):



I also guess the nxm determinant would give you the n-1 or m-1 dimensional sub-volume.
 
No, a better guess is the nxm determinant m > n would give you the sum of n-dimensional oriented sub-volumes.
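One standard way to make this kind of guess precise (my suggestion, not necessarily the paper's construction) is the Gram determinant: for an n×m matrix M with m ≤ n, sqrt(det(MᵀM)) is the m-dimensional volume spanned by the columns. It is related to, but not identical with, the "sum of oriented sub-volumes" guess: by the Cauchy–Binet formula, det(MᵀM) is the sum of the squares of the maximal minors.

```python
import numpy as np

def subvolume(M):
    # m-dimensional volume of the parallelotope spanned by the columns
    # of an n x m matrix (m <= n), via the Gram determinant
    # sqrt(det(M^T M)).
    return np.sqrt(np.linalg.det(M.T @ M))

u = np.array([1.0, 0.0, 0.0])
v = np.array([1.0, 2.0, 0.0])
M = np.column_stack([u, v])   # a 3 x 2 "non-square determinant" case
```

For two columns in R³ this reproduces the magnitude of the ordinary cross product, i.e. the oriented area mentioned above (both give 2.0 here).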
 
I discovered an area where this approach might be helpful.

It is the "multivariate" normal distribution, which is important in machine learning and data analysis.

Most people are familiar with the univariate Gaussian e^(−x² / (2s²)), where s² is the variance, the "width" of the bell-shaped curve. If you want it to be a probability distribution you normalize it so its integral adds up to 1, and you can shift the mean by making the exponent −(x − u)² / (2s²). In the univariate case the normalizing factor is usually written as 1 / (s * sqrt(2π)), which just says the curve gets taller as it gets narrower.

In multivariate mode the normalizing factor has a determinant, it is the determinant of the covariance matrix. It is written as

1 / sqrt ( (2π)^n * | det(v) | )

where v is the covariance matrix and n is the dimensionality (so v is an n × n square matrix by construction).

There's a plethora of statistical methods involving "dimensionality reduction". For example principal component analysis is one such popular method, and it involves manipulating the covariance matrix in a way that might be amenable to the method in the OP.
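A sketch of that normalizer in use (numpy only; `mvn_pdf` and `gauss_pdf` are my own helper names), with a sanity check that in one dimension it reduces to the familiar univariate formula:

```python
import numpy as np

def mvn_pdf(x, mu, cov):
    # Multivariate normal density with the normalizing factor
    # 1 / sqrt((2 pi)^n * det(cov)) quoted above.
    n = len(mu)
    d = x - mu
    norm = 1.0 / np.sqrt((2.0 * np.pi) ** n * np.linalg.det(cov))
    return norm * np.exp(-0.5 * d @ np.linalg.solve(cov, d))

def gauss_pdf(x, u, s):
    # Univariate case: (1 / (s sqrt(2 pi))) * exp(-(x - u)^2 / (2 s^2)).
    return np.exp(-(x - u) ** 2 / (2.0 * s ** 2)) / (s * np.sqrt(2.0 * np.pi))
```

With n = 1 and cov = [[s²]], the determinant is just s² and mvn_pdf collapses to gauss_pdf, which is a quick way to see that the determinant really is playing the role of the squared "width" in higher dimensions.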
 