Commit a5f17ba9 authored Aug 7, 2022 by Stéphane Del Pino
Take into account Xavier's corrections
parent cf69a4b2
This commit is part of merge request !149 "Feature/user doc".
Showing 1 changed file: doc/userdoc.org with 165 additions and 131 deletions.
...
...
@@ -149,7 +149,7 @@ already be discussed.
- There is no predefined constant in ~pugs~. Here a value is provided
for ~pi~.
- There are two kinds of variable in ~pugs~: variables of basic types
-and variables of high-level types. This two kinds of variable behave
+and variables of high-level types. These two kinds of variable behave
almost the same but one must know their differences to understand
better the underlying mechanisms and choices that we made. See
sections [[basic-types]] and [[high-level-types]] for details.
...
...
@@ -181,13 +181,13 @@ boundary conditions, equations of state, source terms for a specific
model. Choosing a numerical method or even more, setting the model
itself, is common in large codes.
-In ~pugs~, all these "parameters" are set through a
-DSL[fn:DSL-def]. Thus, when ~pugs~ is launched, it actually executes a
-provided script. A ~C++~ function is associated to each instruction of
-the script. The ~C++~ components of ~pugs~ are completely unaware one of
-the others. ~pugs~ interpreter is responsible for the data flow between
-the components: it manages the data transfer between those ~C++~
-components and ensures that the workflow is properly defined.
+In ~pugs~, all these "parameters" are set through a DSL. Thus, when ~pugs~
+is launched, it actually executes a provided script. A ~C++~ function is
+associated to each instruction of the script. The ~C++~ components of
+~pugs~ are completely unaware one of the others. ~pugs~ interpreter is
+responsible for the data flow between the components: it manages the
+data transfer between those ~C++~ components and ensures that the
+workflow is properly defined.
**** Why?
...
...
@@ -200,9 +200,9 @@ There are lots of reasons not to use data files. By data file, we
refer to a set of options that describe physical models, numerical
methods or their settings.
-- Data files are not flexible. This implies in the one hand that
+- Data files are not flexible. This implies on the one hand that
application scenarios must be known somehow precisely to reflect
-possible option combinations and in the other hand even defining a
+possible option combinations and on the other hand even defining a
specific initial data may require the creation of a new option and
its associated code (in ~C++~ for instance). \\
Usually, the last point is addressed by adding a local interpreter
...
...
@@ -215,7 +215,7 @@ methods or their settings.
- Generally data files become rapidly obsolete. An option was not the
right one, or its type changed to allow other contexts... This puts
pressure on the user.
-- Even worst, options meaning can depend on other
+- Even worst, option meanings can depend on other
options. Unfortunately, this happens commonly. For instance, a
global option can change implicitly the treatment associated to
another one. This is dangerous since writing or reading the data
...
...
@@ -237,10 +237,10 @@ files or scripts), but it presents several drawbacks.
things that should not be changed.
- Again, one can easily have access to irrelevant options and it
requires a great knowledge of the code to find important ones.
-- With that regard, defining a simulation properly can be a difficult
+- With this in mind, defining a simulation properly can be a difficult
task. For instance, in the early developments of ~pugs~ (when it was
just a raw ~C++~ code) it was tricky to change boundary conditions for
-coupled physics.
+multiphysics problems.
- Another difficulty is related to the fact that code's internal API
is likely to change permanently in a research code. Thus valid
constructions or settings may become rapidly obsolete. In other
...
...
@@ -259,7 +259,7 @@ solution to all problems. However, it offers some advantages.
- It allows to structure the code in the sense that new developments
have to be designed not only focusing on the result but also on the
way it should be used (its interactions with the scripting language).
-- In the same idea, it provides a framework that drives the desired
+- In the same vein, it provides a framework that drives the desired
principle of "do simple things and do them well".
- There are no hidden dependencies between numerical options: the DSL
code is easier to read (than data files) and is less likely to
...
...
@@ -292,10 +292,10 @@ developer, by reading it, the user will have a better understanding of
the development choices and the underlying policy that the code
follows.
-Actually the development framework imposed by the DSL tends to guide
-writing of new methods.
+Actually the development framework imposed by the DSL is a guideline for
+writing of new methods.
-- In the process of writing a *new numerical methods*, one must create
+- In the process of writing a *new numerical method*, one must create
*new functions in the language*. Moreover, if a new method is close to
an existing one, it is generally *better* to use completely new
underlying ~C++~ code than to patch existing methods. Starting from a
...
...
@@ -307,7 +307,7 @@ writing of new methods.
numerical method and the *design* of the code.
- From the computer science point of view, early design for new
numerical methods is generally wrong: usually one cannot
-anticipate precisely enough eventual problems or method
+anticipate precisely enough possible problems or method
corrections.
- It is much more difficult to introduce bugs in existing methods,
since previously validated methods are unchanged!
...
...
@@ -324,7 +324,7 @@ writing of new methods.
deteriorated. At this time, it is likely that the numerical method
design is finished, thus (re)designing the source code makes more
sense.
-- Another consequence is that utilities are not be developed again and
+- Another consequence is that utilities are not developed again and
again.
- This implies an /a priori/ safer code: utilities are well tested and
validated.
...
...
@@ -332,12 +332,12 @@ writing of new methods.
- The code of numerical methods is not polluted by environment
instructions (data initialization, error calculation,
post-processing,...)
-- The counterpart is somehow classical. In the one hand, the
+- The counterpart is somehow classical. On the one hand, the
knowledge of existing utilities is required, this document tries
-to address a part of it. In the other hand, if the developer
+to address a part of it. On the other hand, if the developer
requires a new utility, a good practice is to discuss with the
-other ones to check if it could benefit to them. Then one can
-determine if it should integrate rapidly or not the main
+other developers to check if it could benefit to them. Then one
+can determine if it should integrate rapidly or not the main
development branch.
***** Why not python or any other scripting language?
...
...
@@ -348,7 +348,7 @@ too much freedom: it is not easy to protect data. For instance in the
[[high-level-types]]). It is important since it prevents the user from
modifying data in inconsistent ways. Also, one must keep in mind that
constraining the expressiveness is actually a strength. As said
-before, one can warranty coherence of the data, perform calculations
+before, one can warranty consistency of the data, perform calculations
without paying attention to the parallelism aspects,... Observe that
it is not a limitation: if the DSL's field of application needs to be
extended, it is always possible. But these extensions should never
...
...
@@ -363,7 +363,7 @@ Finally, python is ugly.
*** A high-level language
Following the previous discussion, the reader should now understand
-the motivations that drove the design choices that conduct to build
+the motivations that drove the design choices that conducted to build
~pugs~ as a ~C++~ toolbox driven by a user friendly language.
#+begin_verse
...
...
@@ -409,7 +409,7 @@ define high-level optimizations.
~pugs~ script.
Another benefit of not providing low-level instructions is that the
-scripts are more easy to write and read, and it is more difficult to
+scripts are easier to write and read, and it is more difficult to
write errors.
* Language
...
...
@@ -430,8 +430,8 @@ linefeed string (there is no character type in ~pugs~, just strings).
Actually, ~cout~ is itself a variable, we will come to this later.
-~pugs~ is a strongly typed language. It means that a variable *cannot*
-change of type in its lifetime.
+~pugs~ is a strongly typed language. It means that the type of a
+variable *cannot* change in its lifetime.
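For illustration, a minimal sketch of this strong-typing rule in the ~pugs~ language (the variable names are made up; only the ~let~ declaration and ~cout~ output constructs documented in this file are used):
#+BEGIN_SRC pugs :exports both :results output
let n:N, n = 3;
let b:B, b = n > 2;
cout << "n = " << n << ", b = " << b << "\n";
#+END_SRC
Once declared, ~n~ keeps the type ~N~ for its whole lifetime; re-affecting it with, say, a ~string~ value would be rejected.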
*** Declaration and affectation syntax
...
...
@@ -443,7 +443,7 @@ To declare a variable ~v~ of a given type ~V~, one simply writes
let v:V;
#+END_SRC
-This instruction is read as
+This instruction reads as
#+begin_verse
Let $v\in V$.
#+end_verse
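For instance, with concrete basic types (a hedged sketch; the declaration-plus-affectation form appears in later examples of this file):
#+BEGIN_SRC pugs :exports both :results none
let x:R;
let v:R^3;
#+END_SRC
which reads as "Let $x\in\mathbb{R}$" and "Let $v\in\mathbb{R}^3$".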
...
...
@@ -589,7 +589,7 @@ we give a few examples.
#+END_SRC
#+results: out-of-scope-variable-use
-**** Variable name *cannot* be reused in an enclosed scope
+**** A variable name *cannot* be reused in an enclosed scope
#+NAME: nested-scope-variable-example
#+BEGIN_SRC pugs-error :exports both :results output
{
...
...
@@ -613,9 +613,9 @@ read.
*** Basic types<<basic-types>>
-Basic types in ~pugs~ are boolean (~B~), natural integers (~N~), integers
-(~Z~), real (~R~), small vectors (~R^1~, ~R^2~ and ~R^3~), small
-matrices (~R^1x1~, ~R^2x2~ and ~R^3x3~) and strings (~string~).
+Basic types in ~pugs~ are boolean (~B~), natural integers (~N~), integers
+(~Z~), real numbers (~R~), small vectors (~R^1~, ~R^2~ and ~R^3~), small
+matrices (~R^1x1~, ~R^2x2~ and ~R^3x3~) and strings (~string~).
#+BEGIN_note
Observe that while mathematically, obviously $\mathbb{R} = \mathbb{R}^1
...
...
@@ -624,14 +624,15 @@ in ~pugs~ and are *not implicitly* convertible from one to the other!
This may sound strange but there are few reasons for that.
- First, these are the reflect of internal ~pugs~ ~C++~-data types that
-are used to write algorithms. In its core design pugs aim at writing
-numerical methods generically with regard to the dimension. One of
-the ingredients to achieve this purpose is to use dimension $1$
-vectors and matrices when some algorithms reduce to dimension $1$
-instead of ~double~ values. To avoid ambiguity that may arise in some
-situations (this can lead to very tricky code), we decided to forbid
-automatic conversions of these types with ~double~. When designing the
-language, we adopted the same rule to avoid ambiguity.
+are used to write algorithms. In its core design pugs aims at
+writing numerical methods generically with regard to the
+dimension. One of the ingredients to achieve this purpose is to use
+dimension $1$ vectors and matrices when some algorithms reduce to
+dimension $1$ instead of ~double~ values. To avoid ambiguity that may
+arise in some situations (this can lead to very tricky code), we
+decided to forbid automatic conversions of these types with
+~double~. When designing the language, we adopted the same rule to
+avoid ambiguity.
- A second reason is connected to the first one. Since ~pugs~ aims at
providing numerical methods for problems in dimension $1$, $2$ or
$3$, this allows to distinguish the nature of the underlying objects.
...
...
@@ -639,8 +640,8 @@ This may sound strange but there are few reasons for that.
defining a mesh in dimension $d$ are elements of $\mathbb{R}^d$,
- or that a velocity or a displacement are also defined as
$\mathbb{R}^d$ values.
-Thus using ~R^1~ in dimension $1$ for this kind of data precise their
-nature in some sense .
+Thus using ~R^1~ in dimension $1$ for this kind of data makes precise
+their nature in some sense .
#+END_note
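As a small illustration of these basic types (a sketch with made-up values, using only the affectation forms documented below: lists of scalars for small vectors and lists of lists for small matrices):
#+BEGIN_SRC pugs :exports both :results output
let x:R, x = 1.5;
let u:R^2, u = [1, 2];
let A:R^2x2, A = [[1, 2], [3, 4]];
cout << "x = " << x << "\n";
cout << "u = " << u << "\n";
cout << "A = " << A << "\n";
#+END_SRC
Following the note above, ~x~ (of type ~R~) and a vector of type ~R^1~ would remain distinct objects with no implicit conversion between them.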
**** Expression types
...
...
@@ -735,9 +736,9 @@ which is not a surprise. However, the use of the ~+=~ operator results
in the modification of the stored value. There is no copy.
Actually, this is not really important from the user point of
-view. One just have to keep in mind that, as it will be depicted
-after, high-level variables *are not mutable*: their values can be
-*replaced* by new ones but *cannot be modified*.
+view. One just has to keep in mind that, as it will be depicted
+below, high-level variables *are not mutable*: their values can be
+*replaced* by new ones but *cannot be modified*.
*** Implicit type conversions<<implicit-conversion>>
...
...
@@ -869,10 +870,10 @@ are sorted by type of left hand side variable.
- ~R^2~: vector of dimension 2 ($\mathbb{R}^2$) left hand side variable.
| ~R^2 =~ allowed expression type |
-|---------------------------------------------|
+|----------------------------------------------|
| ~R^2~ |
| ~0~ (special value) |
-| list of 2 scalar (~B~, ~N~, ~Z~ or ~R~) expressions |
+| list of 2 scalars (~B~, ~N~, ~Z~ or ~R~) expressions |
An example of initialization using an $\mathbb{R}^2$ value or the special value ~0~ is
#+NAME: affectation-to-R2-from-list
#+BEGIN_SRC pugs :exports both :results output
...
...
@@ -971,6 +972,27 @@ are sorted by type of left hand side variable.
| ~R^2x2~ |
| ~R^3x3~ |
| ~string~ |
The stored value is the same as the output value described
above. Here is an example.
#+NAME: affectation-to-string-example
#+BEGIN_SRC pugs :exports both :results output
let s_from_B:string,
s_from_B = 2>1;
let s_from_R2:string,
s_from_R2 = [ -3.5, 1.3];
let s_from_R3x3:string,
s_from_R3x3 = [[ -3, 2.5, 1E-2],
[ 2, 1.7, -2],
[1.2, 4, 2.3]];
cout << "s_from_B = " << s_from_B << "\n";
cout << "s_from_R2 = " << s_from_R2 << "\n";
cout << "s_from_R3x3 = " << s_from_R3x3 << "\n";
#+END_SRC
the output is
#+RESULTS: affectation-to-string-example
***** List of defined operator ~+=~ for basic types.
...
...
@@ -1044,6 +1066,23 @@ are sorted by type of left hand side variable.
| ~R^2x2~ |
| ~R^3x3~ |
| ~string~ |
The concatenated value is the same as the output value described
above, for instance
#+NAME: concatenate-to-string-example
#+BEGIN_SRC pugs :exports both :results output
let s:string, s = "foo";
s += [ -3.5, 1.3, -2];
s += "_";
s += 1>2;
s += "_";
s += [[ -3, 2.5],
[ 2, 1.7]];
cout << "s = " << s << "\n";
#+END_SRC
the output is
#+RESULTS: concatenate-to-string-example
***** List of defined operator ~-=~ for basic types.
...
...
@@ -1181,8 +1220,8 @@ are sorted by type of left hand side variable.
Observe that for these small matrix types ($\mathbb{R}^{d\times d}$) the
construction ~A *= B;~ where ~B~ is a matrix of the same type as ~A~ is not
allowed. The main reason for that is that for $d>1$ this operation has
-no interests since it requires a temporary. One will see bellow that
-it is possible to write ~A = A*B;~ if needed.
+no interest since it requires a temporary. One will see bellow that
+it is possible to write ~A = A*B;~ if needed.
#+END_note
- ~string~: the ~*=~ operator is not defined for left hand side string variables.
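To make the remark about ~*=~ concrete, a hedged sketch (the matrix values are made up) that replaces the forbidden ~A *= B;~ by an explicit product and re-affectation:
#+BEGIN_SRC pugs :exports both :results output
let A:R^2x2, A = [[1, 2], [3, 4]];
let B:R^2x2, B = [[0, 1], [1, 0]];
A = A*B;
cout << "A = " << A << "\n";
#+END_SRC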
...
...
@@ -1234,7 +1273,7 @@ The ~not~, ~+~ and ~-~ operators apply to the *expression* on their right. ~++~
and ~--~ operators apply only to a *variable* that can be positioned
before (pre increment/decrement) or after the token (post
increment/decrement). These operators are also inspired from their ~C++~
-counterparts for commodity.
+counterparts for convenience.
The ~+~ unary operator is a convenient operator that is *elided* when
parsing the script.
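A short sketch of these unary operators (made-up values; it assumes the pre/post increment forms described above can be used as standalone statements, as in ~C++~):
#+BEGIN_SRC pugs :exports both :results output
let n:N, n = 0;
n++;
++n;
cout << "n = " << n << ", not (n>1) = " << not (n>1) << "\n";
#+END_SRC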
...
...
@@ -1591,7 +1630,7 @@ the output is
*** High-level types<<high-level-types>>
Aside from the basic types described in the previous section, ~pugs~
-also deals with "high-level" types. This term is more to understand as
+also deals with "high-level" types. This term is better understood as
"non-basic types". The ~pugs~ language is not object oriented to keep it
simple.
...
...
@@ -1620,7 +1659,7 @@ operators can never be applied to variables of these kinds
| ~*=~ | assignment by product |
| ~/=~ | assignment by quotient |
-We conclude by stating that if access operator ~[]~ can eventually be
+We conclude by stating that if access operator ~[]~ can possibly be
defined for high-level types, it should be done with care. It is not
recommended.
...
...
@@ -1721,8 +1760,8 @@ It produces the following error
While the variable ~x~ is defined *before* ~y~, this kind of construction is
forbidden. From a technical point of view, this behavior would be easy
to change (allow to use the fresh value of ~x~ in the definition of ~y~),
-but this make the code unclear and this is not the purpose of compound
-types.
+but this makes the code unclear and this is not the purpose of
+compound types.
#+BEGIN_note
Observe that there is no implicit conversion when dealing with
...
...
@@ -1816,7 +1855,7 @@ boundary conditions to a method.
The ~pugs~ language supports classical statements to control the data
flow. For simplicity, these statements syntax follow their ~C++~
counterpart. The only statement that is not implemented in ~pugs~ is the
-~switch...case~. This may change but in the one hand, up to now it has
+~switch...case~. This may change but on the one hand, up to now it has
never been necessary (up to now, we did not encountered the need to
chain ~if...else~ statements), and on the other hand, keeping the
language as simple as possible remains the policy in ~pugs~ development.
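Since the statement syntax follows the ~C++~ counterpart, a conditional can be sketched as follows (a hedged illustration with made-up values and messages):
#+BEGIN_SRC pugs :exports both :results output
let n:N, n = 3;
if (n > 2) {
  cout << "n is greater than 2\n";
} else {
  cout << "n is not greater than 2\n";
}
#+END_SRC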
...
...
@@ -2087,10 +2126,10 @@ act as operators.
#+BEGIN_note
Actually these functions are not strictly /pure functions/ in the
-computer science sense. The reason is that they can eventually have
-side effects. As an example, it is possible to modify the random seed
-used by the code. In that case, the modified value is not a variable
-of the language itself but the internal random seed.
+computer science sense. The reason is that they may have side
+effects. As an example, it is possible to modify the random seed used
+by the code. In that case, the modified value is not a variable of the
+language itself but the internal random seed.
#+END_note
*** Implicit type conversion for parameters and returned values
...
...
@@ -2167,7 +2206,7 @@ Using compound types as input and output, one can write
This meaningless example produces the following result.
#+results: R22-R-string-to-R-string-function
-**** Lifetime of functions' arguments
+**** Lifetime of function arguments
The arguments used to define a function are *local* variables that exist
only during the evaluation of the function.
...
...
@@ -2209,7 +2248,7 @@ in function expressions.
Running the example, one gets
#+results: non-arg-variables-in-functions
While the function itself is a constant object, one sees that since
-the value of ~a~ is changed, the value function is implicitly
+the value of ~a~ is changed, the function value is implicitly
modified. /This is a dangerous feature and should be avoided!/
Since functions themselves are variables one can use functions in
...
...
@@ -2231,7 +2270,7 @@ output since ~cout~ does not handle compound types output. One gets
**** Lifetime of user-defined functions
Since functions are somehow variables, the lifetime of functions
-follows the similar rules.
+follows similar rules.
Let us give an example
#+NAME: functions-lifetime
...
...
@@ -2282,9 +2321,9 @@ produces the following compilation time error
*** Builtin functions<<builtin-functions>>
In ~pugs~ language, builtin functions are ~C++~ pieces of code that can be
-called in scripts. There usage is very similar to user-defined
+called in scripts. Their usage is very similar to user-defined
functions. They differ from user-defined functions in three points.
-- Builtin functions can have no parameter or no returned value.
+- Builtin functions may have no parameter or no returned value.
- Builtin functions are polymorphic. More precisely, this means that
the signature of a builtin function is also defined by its expected
argument types.
...
...
@@ -2294,7 +2333,7 @@ functions. They differ from user-defined functions in three points.
(actually, this is not a limitation since it is trivial to embed a
builtin function into a user-defined one).
-Here is a simple example of builtin function embedding in a user
+Here is a simple example of builtin function embedded in a user
function.
#+NAME: builtin-function-embedding
#+BEGIN_SRC pugs :exports both :results output
...
...
@@ -2323,7 +2362,7 @@ mathematical functions, one writes in the preamble of the script
#+BEGIN_warning
A work in progress
- At the time of writing this documentation, one should note that
-module inter-dependencies is still not implemented.
+module inter-dependencies are still not implemented.
- Also, (and especially with regard to the ~scheme~ module), module
contents are likely to change and to be reorganized.
- Finally it is almost sure that modules will be equipped with a
...
...
@@ -2332,7 +2371,7 @@ A work in progress
be more natural.
#+END_warning
-One can access to the list of available modules inside the language.
+One can access the list of available modules inside the language.
#+NAME: get-available-modules
#+BEGIN_SRC pugs :exports both :results output
cout << getAvailableModules() << "\n";
...
...
@@ -2386,7 +2425,7 @@ operator ~<<~ is an ~ostream~, the result of the operation is also an
One can overload the ~ostream <<~ construction for high-level types.
Other variables of the type ~ostream~ can be created (in order to write
-to files for instance) as one should see below.
+to files for instance) as we will see below.
**** ~core~ provided functions
...
...
@@ -2422,8 +2461,7 @@ name of the function and its input and output sets.
#+BEGIN_warning
Observe that this function does not provide the list of operators that
-are defined in the module (eventually associated to the defined
-types).
+are defined in the module (possibly associated to the defined types).
#+END_warning
***** ~getPugsBuildInfo: void -> string~
...
...
@@ -2483,10 +2521,8 @@ name existed, it is *erased*.
let fout:ostream, fout = ofstream("filename.txt");
fout << [1,2] << " is a vector of R^2\n";
#+END_SRC
-Running this example produces no output
-#+RESULTS: ofstream-example
-But a file is created (in the execution directory), with the name
-~"filename.txt"~. Its content is
+Running this example produces no output but a file is created (in the
+execution directory), with the name ~"filename.txt"~. Its content is
#+NAME: cat-filename-txt
#+BEGIN_SRC shell :exports results :results output
cat filename.txt
...
...
@@ -2557,9 +2593,9 @@ their ~C++~ man pages for details.
#+END_note
#+BEGIN_note
-Let us comment the use of the ~pow~ function. Actually one can wonder
+Let us comment on the use of the ~pow~ function. Actually one can wonder
why we did not use a syntax like ~x^y~? The reason is that if
-mathematically ${x^y}^z = x^{(y^z)}$, many software treat it (by mistake)
+mathematically ${x^y}^z = x^{(y^z)}$, many softwares treat it (by mistake)
as ${(x^y)}^z$. Thus, using the ~pow~ function avoids any confusion.
#+END_note
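As an illustration of the remark on ~pow~, a hedged sketch (the preamble ~import~ of the module providing the mathematical functions is not shown in this excerpt and is omitted here):
#+BEGIN_SRC pugs :exports both :results output
let x:R, x = pow(2.0, 10.0);
cout << "pow(2,10) = " << x << "\n";
#+END_SRC
Written this way, there is no ambiguity about the association order that an ~x^y~ syntax could introduce.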
...
...
@@ -2581,7 +2617,7 @@ a ~mesh~ that is either designated by an integer or by a ~string~.
A ~boundary~ can designate a set of nodes, edges or faces. The ~boundary~
(descriptor) itself is not related to any ~mesh~, thus the nature of the
-~boundary~ is precised when it is used with a particular ~mesh~.
+~boundary~ is made precise when it is used with a particular ~mesh~.
#+BEGIN_warning
A ~boundary~ *cannot* be used to refer to an interface (/ie/ an inner set of
...
...
@@ -2592,7 +2628,7 @@ items).
Following the same idea, a ~zone~ is a descriptor of a set of cells. It
can be either defined by an integer or by a ~string~. Its meaning is
-precised when it is associated with a ~mesh~.
+made precise when it is associated with a ~mesh~.
***** ~mesh~
...
...
@@ -2707,8 +2743,7 @@ cartesian grid is aligned with the axis and made of identical cells.
The first two arguments are two opposite corners of the box (or of the
segment in 1d) and the list of natural integers (type ~(N)~) sets the
-number of *cells* in each direction. Thus the size of the list of ~N~ is
-$d$.
+number of *cells* in each direction. Thus the size of the list ~N~ is $d$.
For instance one can write:
#+BEGIN_SRC pugs :exports both :results none
...
...
@@ -2918,7 +2953,7 @@ The ~mesh~ is represented in Figure [[fig:gmsh-hybrid-2d]].
#+RESULTS: median-dual-img
#+BEGIN_note
-In ~pugs~, the storage mechanisms of median dual meshes follows the same
+In ~pugs~, the storage mechanisms of median dual meshes follow the same
rules as the diamond dual meshes. As long as the primary mesh lives
and as long as the median dual mesh is referred, it is kept in memory,
thus constructed only once.
...
...
@@ -2984,7 +3019,7 @@ write_mesh(gnuplot_writer("transformed"), m1);
#+BEGIN_note
One should keep in mind that the mesh produced by the ~transform~
-function *shares* the same connectivity than the given mesh. This means
+function *shares* the same connectivity as the original mesh. This means
that in ~pugs~ internals, there is only one connectivity object for
these two meshes.
#+END_note
...
...
@@ -3220,7 +3255,7 @@ operand.
****** ~R^1*Vh -> Vh~ and ~Vh*R^1 -> Vh~
These functions are defined for $\mathbb{P}_0(\mathbb{R}^1)$ data and the
-return value is also a $\mathbb{P}_0(\mathbb{R})$ function.
+return value is a $\mathbb{P}_0(\mathbb{R})$ function.
The following functions
- ~dot: Rˆ1*Vh -> Vh~
...
...
@@ -3229,7 +3264,7 @@ The following functions
****** ~R^2*Vh -> Vh~ and ~Vh*R^2 -> Vh~
These functions are defined for $\mathbb{P}_0(\mathbb{R}^2)$ data and the
-return value is also a $\mathbb{P}_0(\mathbb{R})$ function.
+return value is a $\mathbb{P}_0(\mathbb{R})$ function.
The following functions
- ~dot: Rˆ2*Vh -> Vh~
...
...
@@ -3238,7 +3273,7 @@ The following functions
****** ~R^3*Vh -> Vh~ and ~Vh*R^3 -> Vh~
These functions are defined for $\mathbb{P}_0(\mathbb{R}^3)$ data and the
-return value is also a $\mathbb{P}_0(\mathbb{R})$ function.
+return value is a $\mathbb{P}_0(\mathbb{R})$ function.
The following functions
- ~dot: Rˆ3*Vh -> Vh~
...
...
@@ -3449,11 +3484,11 @@ dimension 3.
****** ~interpolate: mesh*(zone)*Vh_type*(function) -> Vh~
-This function works exactly the same as the previous function. The
-additional parameter, the ~zone~ list is used to define the cells where
-the user function (or the user function list) is interpolated. For
-cells that are not in the ~zone~ list, the discrete function is set to
-the value $0$.
+This function is similar to the previous function. The additional
+parameter, the ~zone~ list is used to define the cells where the user
+function (or the user function list) is interpolated. For cells that
+are not in the ~zone~ list, the discrete function is set to the value
+$0$.
#+BEGIN_SRC pugs :exports both :results none
import mesh;
...
...
@@ -3512,8 +3547,8 @@ Let us consider the following example
let U:Vh, U = integrate(m, Gauss(5), u);
#+END_SRC
Here, for each cell $j$, the value of the discrete function
-$\mathbf{F}_j$ is computed using a Gauss quadrature formula that is
-exact for polynomials of degree $5$, $\mathbf{F}_j \approx\int_j
+$\mathbf{U}_j$ is computed using a Gauss quadrature formula that is
+exact for polynomials of degree $5$, $\mathbf{U}_j \approx\int_j
\mathbf{u}$. More details about quadrature formula will be given
below.
...
...
@@ -3535,7 +3570,7 @@ cells.
****** ~integrate: mesh*quadrature*Vh_type*(function) -> Vh~ <<integrate-P1-vector>>
-This function behaves the same, the user function list size defines
+This function behaves similarly, the user function list size defines
the dimension of the vector value of the produced
$\vec{\mathbb{P}}_0(\mathbb{R})$ discrete function. Actually the
~Vh_type~ parameter is there to allow the construction of
...
...
@@ -3667,7 +3702,7 @@ described in this section. These functions share some properties.
****** ~randomizeMesh: mesh*(boundary_condition) -> mesh~
-This function creates a random mesh by displacing the nodes of a given
+This function creates a random mesh by moving the nodes of a given
~mesh~ and a list of ~boundary_condition~.
The supported boundary conditions are the following:
...
...
@@ -3680,10 +3715,10 @@ One should refer to the section [[boundary-condition-descriptor]] for a
documentation of the boundary condition descriptors.
#+BEGIN_note
-Let us precise these boundary conditions behavior
+Let us make precise these boundary conditions behavior
- In dimension 1, ~fixed~, ~axis~ and ~symmetry~ boundary conditions have
the same effect.
-- In dimension 2, ~axis~ and ~symmetry~ behave the same. Thus, boundaries
+- In dimension 2, ~axis~ and ~symmetry~ behave similarly. Thus, boundaries
supporting this kind of boundary conditions *must* form *straight*
lines.
- In dimension 3, boundaries describing ~axis~ conditions *must* be
...
...
@@ -3820,8 +3855,7 @@ functions may vary.
#+END_note
#+BEGIN_note
-There a three kind of boundaries are supported by ~pugs~, boundaries
-made
+There are three kinds of boundaries supported by ~pugs~, boundaries made
- of sets of nodes,
- of sets of edges, or
- of sets of faces.
...
...
@@ -3838,8 +3872,8 @@ For instance, if an algorithm or a method requires a set of nodes to
set some numerical treatment, it can be deduced from a set of faces.
Obviously, these conversions can be meaningless, for instance, if one
-expects a *line* in 3d, cannot be defined by a set of faces. ~pugs~ will
-forbid this kind of conversion at runtime.
+expects a *line* in 3d, it cannot be defined by a set of faces. ~pugs~
+will forbid this kind of conversion at runtime.
#+END_note
#+BEGIN_note
...
...
@@ -3888,7 +3922,8 @@ This function returns the quadrature descriptor associated to Gauss
formulas for the given ~N~.
In the following table, we summarize the *maximal degree* quadrature
-that are available in ~pugs~ for various elements.
+(exact for polynomials of a given degree) that are available in ~pugs~
+for various elements.
| element type | max. degree |
|------------------------+-------------|
| segment | 23 |
...
...
@@ -3908,8 +3943,8 @@ degree given in argument.
The maximum allowed degree is 23.
-For dimension 2 or 3 elements, Gauss-Legendre formulas are defined by
-tensorization. Conform transformations are used to map the cube
+For 2 or 3-dimensional elements, Gauss-Legendre formulas are defined by
+tensorization. Conform transformations are used to map the cube
$]-1,1[^d$ to supported elements.
****** ~GaussLobatto: N -> quadrature~
...
...
@@ -3919,9 +3954,9 @@ degree given in argument.
The maximum allowed degree is "only" 13.
-For dimension 2 or 3 elements, Gauss-Lobatto formulas are defined by
-tensorization. Conform transformations are used to map cube $]-1,1[^d$
-to supported elements.
+For 2 or 3-dimensional elements, Gauss-Lobatto formulas are defined by
+tensorization. Conform transformations are used to map the cube
+$]-1,1[^d$ to supported elements.
***** ~lagrangian: mesh*Vh -> Vh~
...
...
@@ -3995,7 +4030,7 @@ no effect.
***** Binary operators
-The supported binary operators for ~vh~ data types are arithmetic
+The supported binary operators for ~Vh~ data types are arithmetic
operators.
#+begin_src latex :results drawer :exports results
...
...
@@ -4092,7 +4127,7 @@ Let us consider the following example
cout << "integral(uh) = " << integral_of_R(uh) << "\n";
cout << "integral(uh0) = " << integral_of_R(uh0) << "\n";
#+END_SRC
-Here we substract the mean value of a discrete function.
+Here we subtract the mean value of a discrete function.
#+results: substract-mean-value-to-Vh
****** Additional ~*~ operators
...
...
@@ -4226,7 +4261,7 @@ different.
***** ~writer~
Variables of this type manage outputs: which format is used and
-eventually the writing policy. This policy sets for instance the time
+possibly the writing policy. This policy sets for instance the time
period for time-dependent post processing.
**** ~writer~ provided functions
...
...
@@ -4463,9 +4498,9 @@ series.
#+BEGIN_note
The ~gnuplot~ writers are implemented in parallel.
-The ~gnuplot~ post processing of produced files is the same whichever is
-the number of processors (as soon as the saved data is also the same,
-which is warrantied by ~pugs~ for explicit methods).
+The ~gnuplot~ post processing of produced files does not depend on the
+number of processors (as soon as the saved data is also the same,
+which is ensured by ~pugs~ for explicit methods).
#+END_note
For an obvious practical reason, each ~gnuplot~ file starts with a
...
...
@@ -4488,11 +4523,11 @@ Here is an example of preamble of a produced ~gnuplot~ file.
****** ~gnuplot_1d_writer~ functions
-This writer family makes only sense in 1d.
+This writer family makes sense only in 1d.
#+BEGIN_note
In parallel, as soon as the saved data themselves are the same, the
-~gnuplot_1d_writer~ generates *exactly* the same files (whichever is the
+~gnuplot_1d_writer~ generates *exactly* the same files (whichever the
number of processors) since the coordinates of the post processed data
are sorted according to growing abscissa.
#+END_note
...
...
@@ -4571,8 +4606,8 @@ A typical use of this writer is the following.
******* ~gnuplot_1d_writer: string*R -> writer~ <<gnuplot-1d-series>>
This writer differs from the previous one by handling output
-series. The real value argument defines the period to respect between
-two outputs. It can be viewed as an helper to outputs.
+series. The real value argument defines the period between two
+outputs. It can be viewed as helper to outputs.
Let us give an example to fix ideas.
#+BEGIN_SRC pugs :exports both :results none
...
...
@@ -4626,9 +4661,9 @@ saving times:
These writers differ from the previous ones since it draws the cells
and affects the cell value to its nodes. This produces larger files
-but allows 2d representations. Also, if the saved data is exactly the
-same in parallel, the order of the cells is generally different since
-they are written processor by processor.
+but allows for 2d representations. Also, if the saved data is exactly
+the same in parallel, the order of the cells is generally different
+since they are written processor by processor.
Additionally, this writer allows to write 2d meshes, see paragraph
[[write-mesh]].
...
...
@@ -4714,7 +4749,7 @@ The gnuplot result is displayed on Figure [[fig:writer-gp-2d-cos-sin]].
******* ~gnuplot_writer: string*R -> writer~
This is the time series function in the case of the ~gnuplot_writer~. It
-behaves the same as [[gnuplot-1d-series]].
+behaves similarly to [[gnuplot-1d-series]].
***** ~vtk~ writers
...
...
@@ -4722,7 +4757,7 @@ For more complex post processing (including 3d), ~pugs~ can generate ~vtk~
outputs.
The used format is one file in the ~vtu~ format for each parallel domain
-(and eventually each time). The output is done using binary data for
+(and each time). The output is produced using binary data for
performance reasons. For each time step a ~pvtu~ file is generated to
handle parallelism. And for a complete time series, a ~pvd~ file is
produced. This is the file that should be loaded.
...
...
@@ -4731,7 +4766,7 @@ Observe that each of these files (~vtu~, ~pvtu~ and ~pvd~) contains a
comment that stores the creation date and the version of ~pugs~ that was
run.
-The use is exactly the same than for ~gnuplot~ writers so we do not
+The use is exactly the same as for ~gnuplot~ writers so we do not
provide full examples.
~vtk~ writers are compatible with the ~write_mesh~ function, see paragraph
...
...
@@ -4745,10 +4780,9 @@ produced ~pvd~ file is built by adding ~.pvd~ to the provided ~string~.
****** ~vtk_writer: string*R -> writer~
This function follows the same rule. One just specifies the output
-period. The generated ~pvd~ file is built the same way, one adds ~.pvd~ to
+period. The generated ~pvd~ file is built similarly, one adds ~.pvd~ to
the provided ~string~.
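For illustration, a hedged sketch of building such a time-series writer (the base name and output period are made up; any preamble ~import~ required for the writer functions is omitted here):
#+BEGIN_SRC pugs :exports both :results none
let w:writer, w = vtk_writer("output", 0.1);
#+END_SRC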
***** ~write~, ~force_write~ and ~write_mesh~ functions
Once a mesh writer has been defined, these functions are called to
...
...
@@ -4855,7 +4889,7 @@ share the same connectivity with the ~mesh~.
One probably noticed that using the ~write~ function with a time series
~writer~, last time of the calculation may not be written (see section
[[gnuplot-1d-series]]). The ~force_write~ function does not check that the
-saving time has been reached. It just checks that the current time has
+saving time has been reached. It only checks that the current time has
not already been saved.
Let us improve slightly the example given in section
...
...
@@ -4905,8 +4939,8 @@ Running this example produces the following files
#+RESULTS: ls-produced-gp-1d-series-force
One can see the additional file.
-Each of these file contains the numerical solution at following saving
-times:
+Each of these files contains the numerical solution at following
+saving times:
#+NAME: times-in-gp-1d-series-force
#+BEGIN_SRC shell :exports results :results output
grep -n "# time = " gp_1d_exp_sin_force.*.gnu | cut -d '=' -f 2
...
...