Commit message log
|
Especially for short descriptions, it is annoying to have to type \brief for
every single API doc.
Drop all \brief and enable the AUTOBRIEF feature of doxygen, which always takes
the first sentence of an API doc as the brief description.
Change-Id: I11a8a821b065a128108641a2a63fb5a2b1916e87
|
Previously, this would fail when generating to $builddir if that subtree did
not exist yet in $builddir.
Change-Id: Ia4fba96dcf74a25cf3e515eb3e4f970e0c3cdd54
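The fix itself isn't quoted in this log; as an illustration only, assuming a
hypothetical write_generated_file() helper in the generator, the kind of
change implied looks like this:

import os

def write_generated_file(path, text):
    # Create the missing $builddir subtree first; without this, open()
    # fails with ENOENT when the target directory does not exist yet.
    out_dir = os.path.dirname(path)
    if out_dir and not os.path.isdir(out_dir):
        os.makedirs(out_dir)
    with open(path, 'w') as f:
        f.write(text)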
|
Change-Id: Iae830d716f01810972edbef14fc5383ac647d0ea
|
Change-Id: Ie10c47ee952f253b1ba77ecf6e79f2c033545bc1
|
This change makes the conv_gen application more interactive and
flexible, allowing it to generate not only the code definitions, but
also test vectors and header files in the future. Moreover, it becomes
possible to select an exact code family, such as GSM, GMR, etc.
Change-Id: I0b476b00234c17f78b41d695cf3bfd13edb64c28
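The new command line isn't quoted in this log, so the following is only a
rough sketch of the kind of interface described; the option and choice names
are hypothetical:

import argparse

def parse_argv():
    parser = argparse.ArgumentParser(description="Convolutional code generator")
    # What to generate: code definitions now, test vectors / headers later
    parser.add_argument("target",
        choices=["gen_codes", "gen_vectors", "gen_header"])
    # Which code family to generate for
    parser.add_argument("--family", default="gsm", choices=["gsm", "gmr"])
    # Where to write the generated files
    parser.add_argument("--target-path", default=".")
    return parser.parse_args()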
|
This change separates the convolutional code definitions from the code
generator logic, allowing us to make further changes in a more targeted
way. For example, when adding new codes, only conv_codes.py needs to
change, because such a change isn't related to the generator.
Change-Id: I3428561251b7d7a180d1e9b6fcaad50bdbbc37fa
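The resulting module layout isn't shown in this log; apart from the file
names conv_codes.py and conv_gen.py, everything in the sketch below is
illustrative:

# conv_codes.py: data only, no generator logic
class ConvCode(object):
    def __init__(self, name, polys):
        self.name = name
        self.polys = polys

# GSM xCCH: rate 1/2, constraint length 5, generators 023/033 (octal)
CONV_CODES = [
    ConvCode("xcch", polys=[0o23, 0o33]),
]

# conv_gen.py: generator logic only, importing the definitions above
def generate_c(codes):
    for code in codes:
        print("/* %s: %d generator polynomial(s) */" % (code.name, len(code.polys)))

generate_c(CONV_CODES)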
|
This change introduces the memory usage optimization mentioned in
d2d9760c08f35a231d32f0ebeb73b2927e5573b3. The aim is to make the code
generator able to detect whether the same tables are used by several
convolutional code definitions, and to avoid writing these tables
multiple times.
For now, the detection process isn't fully automatic, so all shared
polynomials should be placed inside the 'shared_polys' dictionary, for
example:

shared_polys = {
	"xcch" : [
		( G0, 1 ),
		( G1, 1 ),
	],
	"mcs" : [
		( G4, 1 ),
		( G7, 1 ),
		( G5, 1 ),
	],
}
Change-Id: I84760f5cdfdaece376b801d2e6cb2954ee875a3b
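The generator internals aren't quoted here; a simplified, purely illustrative
sketch of the table-sharing idea (with shared_polys as in the example above,
everything else hypothetical) could be:

def emit_tables(codes, shared_polys):
    # 'codes' maps a code name to its polynomial list (same format as
    # the shared_polys example above). Codes whose polynomials match a
    # shared_polys entry reference one shared table instead of getting
    # a private copy written again.
    emitted = set()
    for code_name, polys in codes.items():
        shared = [name for name, p in shared_polys.items() if p == polys]
        table = shared[0] if shared else code_name
        if table not in emitted:
            emitted.add(table)
            print("/* *_state[][2] and *_output[][2] tables for '%s' */" % table)
        print("/* code '%s' uses the '%s' tables */" % (code_name, table))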
|
This change finally makes the script executable in a Python 3
environment. Due to Python 3 restrictions, reduce() has to be
imported explicitly.
Change-Id: Icbc81c29f1a226aeed2c1245a5d60809fe124005
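For reference, the explicit import that works on both Python 2.6+ and
Python 3, where reduce() is no longer a builtin:

from functools import reduce

# e.g. OR a list of polynomial bit masks into a single value
print(reduce(lambda x, y: x | y, [0b00001, 0b01000, 0b10000]))  # 25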
|
This is mostly a code style change, but it also increases
compatibility with Python 3.
Change-Id: I5c8271d973f766aeb9cbcab30c4eddfdab54fcbb
|
Change-Id: Ie1452342f524a8b60f2babc07398a1d9c9e06aa3
|
Change-Id: I3327b92715744af4ef61496ef0121555d9d24799
|
Change-Id: I0ea7151f4e8119a8798a9e129b951559e56b0d93
|
To keep the generated tables readable, the line width should be
limited. So, there are now the following limitations:
- _print_term(): up to 12 numbers per line,
- _print_puncture(): up to 12 numbers per line,
- _print_x(): up to 4 blocks per line.
Change-Id: I95256c4ad402a3c088bdb6c5a5cda8b17c31881c
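The actual _print_term() / _print_puncture() / _print_x() bodies aren't shown
here; a minimal sketch of the kind of wrapping described, using a
hypothetical helper name:

def format_rows(values, per_line=12, indent="\t"):
    # Break 'values' into C initializer rows of at most 'per_line' numbers
    lines = []
    for i in range(0, len(values), per_line):
        chunk = values[i:i + per_line]
        lines.append(indent + ", ".join("%3d" % v for v in chunk) + ",")
    return "\n".join(lines)

print(format_rows(list(range(30))))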
|
Instead of generating every convolutional code into a separate file
(such as conv_xcch_gen.c, conv_cs3_gen.c), it is better to have a
single file containing all definitions: for every additional
convolutional code we would otherwise have to add yet another entry to
'src/gsm/Makefile.am'. This approach keeps Makefile.am readable, and
also allows us to share some data between convolutional code
definitions.
For example: xCCH, RACH, SCH, TCH/F, and both CS2 and CS3 may use the
same *_state[][2] and *_output[][2] arrays within a single file. This
optimization is currently WIP.
Change-Id: Ib4e4ee5fdde38429e68e3b2fa50ec03a18f59daa
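As a rough illustration only (the generated file name and layout below are
hypothetical, not taken from the change), single-file generation could look
like:

def generate_single_file(codes, path="conv_codes_gen.c"):
    # Write all convolutional code definitions into one generated C file,
    # so Makefile.am only needs a single entry for it.
    with open(path, "w") as f:
        f.write("/* Generated file: all convolutional code definitions */\n\n")
        for name, polys in codes:
            f.write("/* %s: %d generator polynomial(s) */\n" % (name, len(polys)))
            # ... the *_state[][2] and *_output[][2] tables would be
            # emitted here, shared between codes with identical polynomials

generate_single_file([("xcch", [0o23, 0o33])])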
|
Change-Id: I8550910b9f5c16efc6f15f23c7ee52122c588752
|
The script does not work with python3:

  $ python3 utils/conv_gen.py
    File "utils/conv_gen.py", line 124
      def _print_term(self, fi, num_states, pack = False):

Second, there is no 'python' on FreeBSD, and one needs to select the
major version to use:

  GEN conv_cs3_gen.c
  GEN conv_xcch_gen.c
  GEN conv_cs2_gen.c
  python: not found
  python: not found
  python: not found

By using python2 we solve both issues. On Debian, python2 is provided
by the python-minimal package.