I have been interested in the project of writing a parser that reads ASCII files for a long time, mostly for educational reasons. My idea was not to use tools like Bison or Yacc but to write a hand-written parser. I finally gave it a try with the objective of reading a file representing the description of a 3D scene (which is based on the USDA format, for those of you familiar with it).
Some context: A scene is made up of primitives (a 3D mesh, a light, etc.) that are defined by a series of attributes. These attributes can be scalars or tuples, and an attribute can hold either an array of those or just a single element. Attributes are defined by their name (identifier) and their value type, such as float, float2, float3, or point3f, which represent a float scalar, a 2-tuple of floats, a 3-tuple of floats, and so on. Of course, besides the primitives themselves, I also need to store the attributes that belong to each primitive, along with their name, value type, base type (int, float), and value(s).
As primitives can be nested, the prim_t structure has a next variable that stores a sibling in the primitive hierarchy, so siblings are stored as a linked list at the moment. Similarly, attributes belonging to a primitive are stored as a linked list.
Since this is the first time I’m doing this (and I wrote it in C rather than C++, which I know a bit better), I would really appreciate feedback from the community. Here are the points that I’m particularly interested in / less interested in:
Interested:
- I am interested in feedback about the scanner/lexer side of it. I have written down the rules following a context-free grammar syntax to serve as guidelines for implementing the flow the program is supposed to follow. Feedback on how characters are fetched, how tokens are composed or fetched, etc., would be great.
- I am also interested in how to bridge the value types read from the file (like point3f) to C types. I've chosen to declare a macro (see #define DATA_TYPES) where I establish an equivalence between the value type tokens and their C equivalent types, then build various arrays based on that to form the bridge. Feedback on this approach would be great! Is it a good method? Could it be improved (which I'm sure it can)?
- The attributes use a kind of type erasure. I don't know how to do it differently. Is this the best way or the only way to do it? Can I improve the code here? I have been using a union to store attributes with a single element (which can be a tuple or a scalar) and a pointer to a data structure for arrays. Is this good practice? Can it be done differently?
Less Interested:
- I am not particularly interested in speed here. I am more focused on feedback regarding general code quality, with a focus on the two techniques I described earlier: the scanner and the best way to store the attributes.
- I know I can use mmap to make it faster, so no need to focus on that.
- I am also less interested in error reporting for now. I'm aware that just calling die is not the best way to handle things and that the current implementation isn't production-ready in terms of error handling.
PS: I’ve done my best to remove the warnings, but I couldn’t completely get rid of unsafe buffer access.
All feedback greatly appreciated and humbly taken.
Code:
/**
 * clang src/readgeo.c -Weverything -std=c11 -o build/readgeo.exe -O3
 */
#pragma clang diagnostic ignored "-Wunsafe-buffer-usage"
#define _CRT_SECURE_NO_WARNINGS

#include <stdlib.h>
#include <stdio.h>
#include <assert.h>
#include <ctype.h>
#include <string.h>
#include <stdbool.h>
#include <stdint.h>

#ifdef _WIN32
#include <Windows.h>
#else
#include <time.h>
#include <sys/time.h>
#endif

#define TOKEN_MAX_LEN 256

typedef enum {
    BT_BOOL,
    BT_INT32,
    BT_FLOAT,
    BT_TOKEN,
} BaseType;

typedef enum {
    AT_NONE,
    AT_BOOL,
    AT_INT32,
    AT_FLOAT,
    AT_FLOAT2,
    AT_FLOAT3,
    AT_TOKEN,
} AttributeType;

typedef struct {
    size_t capacity; /* max number of elements that can be stored */
    size_t size;     /* number of stored elements */
    BaseType type;   /* base type: float, bool, uint32, ... */
    uint32_t bytes;
    void* memory;
} data_t;

typedef struct attribute_t {
    char identifier[TOKEN_MAX_LEN];
    AttributeType attr_type;
    BaseType base_type;
    struct attribute_t* next;
    union {
        bool b[4];  /* bool, bool2, bool3, bool4 */
        int i[4];   /* int, int2, int3, int4 */
        float f[4]; /* float, float2, float3, float4 */
    };
    data_t* data;
} attribute_t;

typedef struct prim_t {
    int schid; // schema id
    char identifier[TOKEN_MAX_LEN];
    struct prim_t* parent;
    struct prim_t* next; /* child-sibling. Linked list */
    attribute_t* attributes;
} prim_t;

static prim_t* root;
static int ch;
static FILE* file;
static int line;
static int col;

static void inp(void) {
    ch = fgetc(file);
    if (ch == '\n') { line++; col = 0; }
    col = (ch == '\t') ? col + 4 : col + 1;
}

static void unget(void) {
    ungetc(ch, file);
    col--;
}

typedef enum {
    TOKEN_RESERVED,
    TOKEN_IDENTIFIER,
    TOKEN_NUMBER,
    TOKEN_LEFT_BRACE,
    TOKEN_RIGHT_BRACE,
    TOKEN_LEFT_BRACKET,
    TOKEN_RIGHT_BRACKET,
    TOKEN_LEFT_PARENTHESIS,
    TOKEN_RIGHT_PARENTHESIS,
    TOKEN_EQUAL,
    TOKEN_COMMA,
    TOKEN_COLON,
    TOKEN_INT,
    TOKEN_FLOAT,
    TOKEN_UNKNOWN
} TokenCode;

typedef struct {
    char content[TOKEN_MAX_LEN];
    TokenCode type;
} token_t;

#define MAX_TABS 16

static void free_alloc(prim_t* const prim, size_t l) {
    int nattr;
    char tabs[MAX_TABS];
    attribute_t* attr;
    if (prim->next) free_alloc(prim->next, l+1L);
    assert(l < MAX_TABS);
    memset(tabs, '\t', l);
    tabs[(l == 0) ? 0 : l] = '\0';
    printf("%s%s\n", tabs, prim->identifier);
    fflush(stdout);
    nattr = 0;
    attr = prim->attributes;
    while (attr != NULL) {
        nattr++;
        printf("%s%04d: %s\n", tabs, nattr, attr->identifier);
        fflush(stdout);
        if (attr->data != NULL) {
            if (attr->data->memory != NULL) free(attr->data->memory);
            memset(attr->data, 0x0, sizeof(data_t));
        }
        attr = attr->next;
    }
    free(prim);
}

static _Noreturn void die(const char* err, int _line_) {
    fprintf(stderr, "%s (line: %d, col: %d, last char read: %c), at %d\n",
            err, line, col, ch, _line_);
    fclose(file);
    free_alloc(root, 0);
    abort();
}

static token_t next_token(void) {
    token_t token;
    int tl;
    memset(token.content, 0x0, TOKEN_MAX_LEN);
    token.type = TOKEN_UNKNOWN;
    tl = 0;
    inp();
    while (isspace(ch) || ch == '\n' || ch == '\t') inp();
    while (ch == '#') {
        while (ch != '\n') { inp(); }
        return next_token();
    }
    if (isalpha(ch)) {
        do {
            token.content[tl++] = (char)ch;
            assert(tl < TOKEN_MAX_LEN-1);
            inp();
        } while (isalnum(ch));
        if (ch != ' ') unget();
        token.content[tl] = '\0';
        token.type = TOKEN_RESERVED;
    }
    else if (ch == '"') {
        inp();
        while (ch != '"' && ch != EOF) {
            if (!isalnum(ch) && ch != '_' && ch != ':' && ch != '!')
                die("error while reading identifier", __LINE__);
            token.content[tl++] = (char)ch;
            inp();
        }
        token.content[tl] = '\0';
        token.type = TOKEN_IDENTIFIER;
        inp();
        if (ch != ' ') unget();
    }
    else if (isdigit(ch) || ch == '-') {
        do {
            token.content[tl++] = (char)ch;
            inp();
        } while (isdigit(ch) || ch == '.' || ch == 'e' || ch == '-');
        if (ch != ' ') unget();
        token.content[tl] = '\0';
        token.type = TOKEN_NUMBER;
    }
    else if (ch == '{') token.type = TOKEN_LEFT_BRACE;
    else if (ch == '}') token.type = TOKEN_RIGHT_BRACE;
    else if (ch == '[') token.type = TOKEN_LEFT_BRACKET;
    else if (ch == ']') token.type = TOKEN_RIGHT_BRACKET;
    else if (ch == '(') token.type = TOKEN_LEFT_PARENTHESIS;
    else if (ch == ')') token.type = TOKEN_RIGHT_PARENTHESIS;
    else if (ch == '=') token.type = TOKEN_EQUAL;
    else if (ch == ',') token.type = TOKEN_COMMA;
    else if (ch == ':') token.type = TOKEN_COLON;
    else {}
    return token;
}

static const char* spec_tokens[] = {NULL, "def", "over", "class", NULL};
static const char* schema_tokens[] = {NULL, "Mesh", "Camera", "Light", "Xform", "GeomSubset", NULL};
static const char* var_tokens[] = {NULL, "uniform", NULL}; /* variability. Defaults to varying */

/* https://openusd.org/dev/api/_usd__page__datatypes.html */
/*    Value Type Token | Base Type | Tuple Size */
#define DATA_TYPES \
    X("bool",       AT_BOOL,   1, bool)    \
    X("int",        AT_INT32,  1, int32_t) \
    X("float",      AT_FLOAT,  1, float)   \
    X("float2",     AT_FLOAT2, 2, float)   \
    X("float3",     AT_FLOAT3, 3, float)   \
    X("double",     AT_FLOAT,  1, float)   \
    X("double3",    AT_FLOAT3, 3, float)   \
    X("point3f",    AT_FLOAT3, 3, float)   \
    X("color3f",    AT_FLOAT3, 3, float)   \
    X("normal3f",   AT_FLOAT3, 3, float)   \
    X("texCoord2f", AT_FLOAT2, 2, float)   \
    X("token",      AT_TOKEN,  1, char)

static const char* value_type_tokens[] = {
#define X(a, b, c, d) a,
    NULL, DATA_TYPES NULL
#undef X
};

static AttributeType attribute_types[] = {
#define X(a, b, c, d) (b),
    AT_NONE, DATA_TYPES AT_NONE
#undef X
};

static size_t tuple_sizes[] = {
#define X(a, b, c, d) (c),
    0, DATA_TYPES 0
#undef X
};

static int find(const char* token, const char* const tokens[]) {
    for (int i = 1; tokens[i] != NULL; ++i) {
        if (strcmp(token, tokens[i]) == 0) return i;
    }
    return 0;
}

/**
 * @todo padding here is probably a good idea?
 */
static void init_data(data_t** data, BaseType type) {
    *data = (void*)malloc(sizeof(data_t));
    (*data)->capacity = 16; /* 16x int, or 16x (float, float, float) if vec3f for instance */
    (*data)->size = 0;
    (*data)->type = type;
    (*data)->bytes = (type == BT_BOOL || type == BT_TOKEN) ? 1 : 4;
    (*data)->memory = (void*)malloc((*data)->bytes * (*data)->capacity);
}

static void push(data_t* const data, void* v) {
    void* temp;
    assert(data->capacity > 0);
    if (data->size + 1 > data->capacity) {
        temp = (void*)realloc(data->memory, data->capacity * 2 * data->bytes);
        if (temp == NULL) die("can't allocate memory", __LINE__);
        data->memory = temp;
        data->capacity *= 2;
    }
    memcpy((char*)data->memory + data->size * data->bytes, v, data->bytes);
    data->size++;
}

static void push_token(data_t* const data, const char* const token) {
    size_t size;
    void* temp;
    size_t req_capacity, new_capacity;
    assert(data->bytes == 1 && data->capacity > 0);
    size = strlen(token);
    req_capacity = data->size + size + 1; /* need space to store token + \0 */
    if (req_capacity > data->capacity) {
        /* note: max() is not standard C, spelled out as a ternary instead */
        new_capacity = (data->capacity * 2 > req_capacity) ? data->capacity * 2 : req_capacity;
        temp = (void*)realloc(data->memory, new_capacity);
        if (temp == NULL) die("can't allocate memory", __LINE__);
        data->memory = temp;
        data->capacity = new_capacity;
    }
    memcpy((char*)data->memory + data->size, token, size);
    memset((char*)data->memory + data->size + size, '\0', 1);
    data->size += size + 1;
}

typedef void (*conv_and_push_func)(data_t*, const char* const);

static void conv_push_bool(data_t* data, const char* const token) {
    bool b;
    assert(strlen(token) != 0 && (token[0] == '0' || token[0] == '1'));
    b = (token[0] == '0') ? false : true;
    push(data, &b);
}

static void conv_push_int(data_t* data, const char* const token) {
    int a;
    a = atoi(token);
    push(data, &a);
}

static void conv_push_float(data_t* data, const char* const token) {
    float f;
    f = strtof(token, NULL);
    push(data, &f);
}

static void read_array_scalar(data_t* data, BaseType type, conv_and_push_func conv_and_push) {
    token_t token;
    token = next_token();
    if (token.type != TOKEN_LEFT_BRACKET) die("expected [ after = for arrays", __LINE__);
    while (1) {
        token = next_token();
        if ((type < BT_TOKEN && token.type != TOKEN_NUMBER) ||
            (type == BT_TOKEN && token.type != TOKEN_IDENTIFIER)) {
            die("expected a string or a number (bad formatting)", __LINE__);
        }
        conv_and_push(data, token.content);
        token = next_token();
        if (token.type != TOKEN_COMMA) break;
    }
    if (token.type != TOKEN_RIGHT_BRACKET) die("expected ]", __LINE__);
}

static void read_array_tuple(data_t* data, size_t tuple, conv_and_push_func conv_and_push) {
    token_t token;
    size_t i;
    token = next_token();
    if (token.type != TOKEN_LEFT_BRACKET) die("expected [ after = for arrays", __LINE__);
    while (1) {
        token = next_token();
        if (token.type != TOKEN_LEFT_PARENTHESIS) die("error while reading tuple", __LINE__);
        for (i = 0; i < tuple;) {
            token = next_token();
            if (token.type != TOKEN_NUMBER) {
                die("expected a number (bad formatting)", __LINE__);
            }
            conv_and_push(data, token.content);
            token = next_token();
            if (!(++i < tuple) ? token.type == TOKEN_COMMA : token.type == TOKEN_RIGHT_PARENTHESIS)
                die("bad formatting in tuple", __LINE__);
        }
        token = next_token();
        if (token.type != TOKEN_COMMA) break;
    }
    if (token.type != TOKEN_RIGHT_BRACKET) die("expected ]", __LINE__);
}

static data_t* read_attr_array(BaseType type, size_t tuple) {
    data_t* data;
    conv_and_push_func conv_and_push;
    init_data(&data, type);
    switch (type) {
        case BT_BOOL:  conv_and_push = conv_push_bool;  break;
        case BT_INT32: conv_and_push = conv_push_int;   break;
        case BT_FLOAT: conv_and_push = conv_push_float; break;
        case BT_TOKEN: assert(tuple == 1); conv_and_push = push_token; break;
    }
    if (tuple == 1)
        read_array_scalar(data, type, conv_and_push);
    else
        read_array_tuple(data, tuple, conv_and_push);
    return data;
}

typedef void (*conv_and_store_func)(attribute_t* const, const char*, size_t);

static void conv_and_store_bool(attribute_t* const attr, const char* token, size_t i) {
    assert(strlen(token) != 0 && (token[0] == '0' || token[0] == '1') &&
           i < 4 && attr->base_type == BT_BOOL);
    attr->b[i] = (token[0] == '0') ? false : true;
}

static void conv_and_store_int(attribute_t* const attr, const char* token, size_t i) {
    assert(i < 4 && attr->base_type == BT_INT32);
    attr->i[i] = atoi(token);
}

static void conv_and_store_float(attribute_t* const attr, const char* token, size_t i) {
    assert(i < 4 && attr->base_type == BT_FLOAT);
    attr->f[i] = strtof(token, NULL);
}

static void read_attr_value(BaseType type, size_t tuple, attribute_t* const attr) {
    token_t token;
    conv_and_store_func conv_and_store;
    conv_and_store = NULL;
    switch(type) {
        case BT_BOOL:  conv_and_store = conv_and_store_bool;  break;
        case BT_INT32: conv_and_store = conv_and_store_int;   break;
        case BT_FLOAT: conv_and_store = conv_and_store_float; break;
        case BT_TOKEN: break;
    }
    if (tuple == 1) {
        token = next_token();
        if (type < BT_TOKEN) {
            if (token.type != TOKEN_NUMBER) die("expected a number (bad formatting)", __LINE__);
            assert(conv_and_store != NULL);
            conv_and_store(attr, token.content, 0);
        }
        else {
            if (token.type != TOKEN_IDENTIFIER) die("expected a token (bad formatting)", __LINE__);
            // NOT IMPLEMENTED strcpy(attr->token, token.content);
        }
    }
    else {
        assert(type != BT_TOKEN); /* tokens are not stored as tuples */
        token = next_token();
        if (token.type != TOKEN_LEFT_PARENTHESIS) die("error while reading tuple", __LINE__);
        for (size_t i = 0; i < tuple;) {
            token = next_token();
            if (token.type != TOKEN_NUMBER) {
                die("expected a number (bad formatting)", __LINE__);
            }
            conv_and_store(attr, token.content, i);
            token = next_token();
            if (!(++i < tuple) ? token.type == TOKEN_COMMA : token.type == TOKEN_RIGHT_PARENTHESIS)
                die("bad formatting in tuple", __LINE__);
        }
    }
}

static void read_prim(prim_t* const prim, size_t l);

static void read_prim_body(prim_t* const prim, size_t l) {
    token_t token;
    prim_t* cur_child;
    attribute_t* cur_attr;
    prim_t** next_prim_ptr;
    /*
    char tabs[16];
    assert(l <= 16L);
    memset(tabs, ' ', l * 2);
    tabs[(l == 0) ? 0 : l * 2] = '\0';
    */
    cur_attr = NULL;
    next_prim_ptr = NULL;
    cur_child = prim->next;
    assert(prim->attributes == NULL);
    while(1) {
        token = next_token();
        assert(!feof(file));
        /* nested prim definition */
        if (find(token.content, spec_tokens)) {
            prim_t* child = (prim_t*)malloc(sizeof(prim_t));
            memset(child, 0x0, sizeof(prim_t));
            child->parent = prim;
            next_prim_ptr = (prim->next == NULL) ? &prim->next : &cur_child->next;
            *next_prim_ptr = child;
            cur_child = child;
            read_prim(child, l+1L);
        }
        /* end of block for current prim definition */
        else if (token.type == TOKEN_RIGHT_BRACE) {
            break;
        }
        /* dictionary - optional */
        else if (token.type == TOKEN_LEFT_PARENTHESIS) {
            do { inp(); } while (ch != ')');
        }
        /* reading a prim's attribute */
        else {
            int vti, vi; /* index into the value_type_tokens array, variability index */
            bool is_array;
            char identifier[TOKEN_MAX_LEN];
            int off;
            BaseType type;
            attribute_t* attr;           /* attribute to be created */
            attribute_t** next_attr_ptr; /* next pointer to assign new attribute to */
            off = 0;
            is_array = false;
            next_attr_ptr = NULL;
            /* variability: optional */
            if ((vi = find(token.content, var_tokens))) {
                token = next_token();
            }
            /* type */
            if (!(vti = find(token.content, value_type_tokens))) {
                die("expected type declaration", __LINE__);
            }
            token = next_token();
            /* is array? */
            if (token.type == TOKEN_LEFT_BRACKET) {
                token = next_token();
                if (token.type != TOKEN_RIGHT_BRACKET)
                    die("syntax error, missing closing bracket \']\'", __LINE__);
                is_array = true;
                token = next_token();
            }
            /* read attribute, optional `:` separated words */
            while(1) {
                assert(token.type == TOKEN_RESERVED && strlen(token.content) != 0);
                memcpy(identifier + off, token.content, strlen(token.content) + 1); /* include '\0' in copy */
                off += strlen(token.content);
                token = next_token();
                if (token.type != TOKEN_COLON) break;
                identifier[off++] = ':';
                token = next_token();
            }
            /*
            printf("%s * %s %s%s %s\n", tabs,
                   (vi == 0 ? "\b" : var_tokens[vi]),
                   value_type_tokens[vti],
                   (is_array ? "[]" : ""),
                   identifier);
            */
            /* equal */
            if (token.type != TOKEN_EQUAL) {
                die("= expected after attribute declaration", __LINE__);
            }
            switch(attribute_types[vti]) {
                case AT_BOOL:  type = BT_BOOL;  break;
                case AT_INT32: type = BT_INT32; break;
                case AT_FLOAT:
                case AT_FLOAT2:
                case AT_FLOAT3: type = BT_FLOAT; break;
                case AT_TOKEN: type = BT_TOKEN; break;
                case AT_NONE: assert(false); __builtin_unreachable();
            }
            attr = (attribute_t*)malloc(sizeof(attribute_t));
            memset(attr, 0x0, sizeof(attribute_t));
            strcpy((char*)attr->identifier, identifier);
            attr->base_type = type;
            //attr->value_type =
            next_attr_ptr = (prim->attributes == NULL) ? &prim->attributes : &cur_attr->next;
            assert((*next_attr_ptr) == NULL);
            *next_attr_ptr = attr;
            cur_attr = attr;
            /* reading data */
            if (is_array) {
                attr->data = read_attr_array(type, tuple_sizes[vti]);
            }
            else {
                read_attr_value(type, tuple_sizes[vti], attr);
            }
        }
    }
}

static void read_prim(prim_t* const prim, size_t l) {
    token_t token;
    //char tabs[16];
    int schid;
    token = next_token();
    if (!(schid = find(token.content, schema_tokens))) {
        die("expected schemas", __LINE__);
    }
    token = next_token();
    if (token.type != TOKEN_IDENTIFIER) {
        die("expected identifier", __LINE__);
    }
    strcpy((char*)prim->identifier, token.content);
    /*
    assert(l <= 16L);
    memset(tabs, ' ', l * 2);
    tabs[(l == 0) ? 0 : l * 2] = '\0';
    printf("%s-%s/%s\n", tabs, schema_tokens[schid], prim->identifier);
    fflush(stdout);
    */
    token = next_token();
    if (token.type == TOKEN_LEFT_PARENTHESIS) {
        do { inp(); } while (ch != ')');
        token = next_token();
    }
    if (token.type != TOKEN_LEFT_BRACE) {
        die("expected {", __LINE__);
    }
    read_prim_body(prim, l);
}

int main(int argc, char** argv) {
    token_t token;
    double elapsed = 0.0;
#ifdef _WIN32
    LARGE_INTEGER frequency; // ticks per second
    LARGE_INTEGER start, end;
#else
    long seconds, nanoseconds;
    struct timespec start, end;
#endif
    line = 1;
    col = 0;
    if (argc-- > 1) {
        file = fopen(argv[1], "r");
    }
    assert(file != NULL);
    root = (prim_t*)malloc(sizeof(prim_t));
    assert(root != NULL);
    memset(root, 0x0, sizeof(prim_t));
    strcpy((char*)root->identifier, "root");
    /*
    Context Free Grammar for USDA file format:
    ==========================================
    S -> defBlock | overBlock | classBlock
    defBlock -> "def" schemas identifier '{' body '}'
    overBlock -> "over" schemas identifier '{' body '}'
    classBlock -> "class" schemas identifier '{' body '}'
    schemas -> "Mesh" | "Camera" | "Light" | "Xform" | ...
    identifier -> [a-zA-Z0-9_:]+
    body -> (defBlock | overBlock | classBlock | statement)*
    statement -> attribute | otherPrims
    attribute -> [uniform] type array? identifier '=' value
    type -> "float" | "int" | "float2" | "float3" | ...
    array -> "[]"  // Array specifier
    identifier -> [a-zA-Z_:][a-zA-Z0-9_:]*
    value -> singleValue | tupleValue | arrayValue
    singleValue -> number
    tupleValue -> '(' number (',' number)* ')'
    arrayValue -> '[' (singleValue | tupleValue) (',' (singleValue | tupleValue))* ']'
    otherPrims -> ...  // Define other primitives here if needed
    number -> [0-9]+(\.[0-9]+)?  // Basic number definition
    */
#ifdef _WIN32
    QueryPerformanceFrequency(&frequency);
    QueryPerformanceCounter(&start);
#else
    clock_gettime(CLOCK_MONOTONIC, &start);
#endif
    while (1) {
        token = next_token();
        if (feof(file)) break;
        if (!find(token.content, spec_tokens)) {
            die("expected specifier", __LINE__);
        }
        read_prim(root, 0);
    }
#ifdef _WIN32
    QueryPerformanceFrequency(&frequency);
    QueryPerformanceCounter(&end);
    // Calculate elapsed time in seconds
    elapsed = (double)(end.QuadPart - start.QuadPart) / (double)frequency.QuadPart;
    printf("Time taken: %f seconds\n", elapsed);
#else
    clock_gettime(CLOCK_MONOTONIC, &end);
    seconds = end.tv_sec - start.tv_sec;
    nanoseconds = end.tv_nsec - start.tv_nsec;
    elapsed = seconds + nanoseconds * 1e-9;
    printf("Time taken: %f seconds\n", elapsed);
#endif
    fclose(file);
    //die("test", __LINE__);
    free_alloc(root, 0);
    return 0;
}
Small file for testing if you want to:
#usda 1.0

def Mesh "pCube1" (
    kind = "component"
)
{
    uniform bool doubleSided = 1
    float3[] extent = [(-0.5, -0.5, -0.5), (0.5, 0.5, 0.5)]
    int[] faceVertexCounts = [4, 4, 4, 4, 4, 4]
    int[] faceVertexIndices = [0, 1, 3, 2, 2, 3, 5, 4, 4, 5, 7, 6, 6, 7, 1, 0, 1, 7, 5, 3, 6, 0, 2, 4]
    normal3f[] normals = [(0, 0, 1), (0, 0, 1), (0, 0, 1), (0, 0, 1), (0, 1, 0), (0, 1, 0), (0, 1, 0), (0, 1, 0), (0, 0, -1), (0, 0, -1), (0, 0, -1), (0, 0, -1), (0, -1, 0), (0, -1, 0), (0, -1, 0), (0, -1, 0), (1, 0, 0), (1, 0, 0), (1, 0, 0), (1, 0, 0), (-1, 0, 0), (-1, 0, 0), (-1, 0, 0), (-1, 0, 0)] (
        interpolation = "faceVarying"
    )
    point3f[] points = [(-0.5, -0.5, 0.5), (0.5, -0.5, 0.5), (-0.5, 0.5, 0.5), (0.5, 0.5, 0.5), (-0.5, 0.5, -0.5), (0.5, 0.5, -0.5), (-0.5, -0.5, -0.5), (0.5, -0.5, -0.5)]
    color3f[] primvars:displayColor = [(0.13320851, 0.13320851, 0.13320851)] (
        customData = {
            dictionary Maya = {
                bool generated = 1
            }
        }
    )
    texCoord2f[] primvars:st = [(0.375, 0), (0.625, 0), (0.375, 0.25), (0.625, 0.25), (0.375, 0.5), (0.625, 0.5), (0.375, 0.75), (0.625, 0.75), (0.375, 1), (0.625, 1), (0.875, 0), (0.875, 0.25), (0.125, 0), (0.125, 0.25)] (
        customData = {
            dictionary Maya = {
                token name = "map1"
            }
        }
        interpolation = "faceVarying"
    )
    int[] primvars:st:indices = [0, 1, 3, 2, 2, 3, 5, 4, 4, 5, 7, 6, 6, 7, 9, 8, 1, 10, 11, 3, 12, 0, 2, 13]
    uniform token subdivisionScheme = "none"

    def GeomSubset "back"
    {
        uniform token elementType = "face"
        uniform token familyName = "componentTag"
        int[] indices = [2]
    }

    def GeomSubset "bottom"
    {
        uniform token elementType = "face"
        uniform token familyName = "componentTag"
        int[] indices = [3]
    }

    def GeomSubset "front"
    {
        uniform token elementType = "face"
        uniform token familyName = "componentTag"
        int[] indices = [0]
    }

    def GeomSubset "left"
    {
        uniform token elementType = "face"
        uniform token familyName = "componentTag"
        int[] indices = [5]
    }

    def GeomSubset "right"
    {
        uniform token elementType = "face"
        uniform token familyName = "componentTag"
        int[] indices = [4]
    }

    def GeomSubset "top"
    {
        uniform token elementType = "face"
        uniform token familyName = "componentTag"
        int[] indices = [1]
    }
}
A comment received on this line:

    (*data)->bytes = (type == BT_BOOL || type == BT_TOKEN) ? 1 : 4;

It looks like the code assumes bool has size 1. Do you want the code to work when bool is larger?