Hello, I have a question about non-dimensionalization.
The following is an excerpt from your paper:
It is well known that data normalization is an important pre-processing step in traditional deep learning, which typically involves scaling the input features of a dataset so that they have similar magnitudes and ranges [55, 56]. However, this process may not be generally applicable for PINNs, as the target solutions are typically not available when solving forward PDE problems. In such cases, it is important to ensure that the target output variables vary within a reasonable range. One way to achieve this is through non-dimensionalization.
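For concreteness, my understanding of non-dimensionalization is roughly the following sketch; the characteristic scales U_star, L_star, rho and the function name are my own illustration, not taken from your paper or code:

# Illustrative sketch only: hypothetical characteristic scales, not the repo's values.
U_star = 1.0   # characteristic velocity
L_star = 0.1   # characteristic length, e.g. the cylinder diameter
rho = 1.0      # fluid density

def nondimensionalize(t, x, y, u, v, p):
    # Rescale the physical variables so the target solution is O(1).
    t_nd = t * U_star / L_star
    x_nd = x / L_star
    y_nd = y / L_star
    u_nd = u / U_star
    v_nd = v / U_star
    p_nd = p / (rho * U_star**2)
    return t_nd, x_nd, y_nd, u_nd, v_nd, p_nd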
However, while studying the ns_unsteady_cylinder example code, I noticed that the coordinate data (x, y) are still rescaled before entering the network. So what is the significance of non-dimensionalizing the coordinates beforehand, given that after this rescaling (t, x, y) all fall within the interval [0, 1] anyway? The relevant code is:
def neural_net(self, params, t, x, y):
    t = t / self.temporal_dom[1]  # rescale t into [0, 1]
    x = x / self.L                # rescale x into [0, 1]
    y = y / self.W                # rescale y into [0, 1]
    inputs = jnp.stack([t, x, y])
    outputs = self.state.apply_fn(params, inputs)
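As a quick sanity check of my reading, with illustrative domain sizes (not the actual config values):

import jax.numpy as jnp

# Illustrative domain sizes only, not the actual ns_unsteady_cylinder config.
temporal_dom = jnp.array([0.0, 10.0])
L, W = 2.2, 0.41

t = jnp.linspace(0.0, temporal_dom[1], 5)
x = jnp.linspace(0.0, L, 5)
y = jnp.linspace(0.0, W, 5)

# After the same rescaling as in neural_net, every coordinate lies in [0, 1].
print(t / temporal_dom[1])  # 0.0, 0.25, 0.5, 0.75, 1.0
print(x / L)                # 0.0, 0.25, 0.5, 0.75, 1.0
print(y / W)                # 0.0, 0.25, 0.5, 0.75, 1.0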
Looking forward to your reply, thanks.