Generic Container to Native Container
What

Generic containers such as List<T>, T[], Dictionary<K, V> and HashSet<T> are not Burst compatible.
Why

Refactoring these containers into their native equivalents brings the code closer to Burst compatibility and makes further optimizations possible.
How

Convert the element type

First we need to convert the element type T to a struct that uses only Burst-compatible fields. Refer to High Performance C# for more information.
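As an illustration (the Enemy type and its fields are hypothetical, not from the original), a managed class might be flattened into a Burst-compatible struct like this:

```csharp
// before: a class with managed, Burst-incompatible fields
class Enemy {
    public string Name;         // managed reference type
    public List<int> HitFrames; // generic container
}

// after: a struct with only unmanaged, Burst-compatible fields
struct EnemyData {
    public FixedString64Bytes Name; // fixed-size string from Unity.Collections
    public int HitCount;            // flattened scalar instead of a nested container
}
```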
Allocation type

Native containers are not garbage collected the way generic containers are. You have to decide a container's lifetime by choosing an appropriate allocation type:
- Allocator.Temp: suitable for sequential processing on the main thread. It is automatically disposed at the end of the frame, but you can also dispose it manually earlier. This is the fastest allocation among these types.
- Allocator.TempJob: suitable for use in jobs. You can pass a job handle to the Dispose function and the container will be disposed after the relevant job has finished.
- Allocator.Persistent: suitable for containers kept as fields of a system. Dispose them in the system's OnDestroy. This is the slowest allocation among these types.
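The three lifetimes differ mainly in who calls Dispose and when; a sketch (T stands for any Burst-compatible struct, SomeJob for any job writing into the list; both are placeholders):

```csharp
// Temp: fastest; automatically disposed at the end of the frame
var temp = new NativeList<T>(Allocator.Temp);

// TempJob: pass the job handle to Dispose so cleanup waits for the job
var tempJob = new NativeList<T>(Allocator.TempJob);
var handle = new SomeJob { Data = tempJob }.Schedule();
tempJob.Dispose(handle);

// Persistent: lives until you dispose it, e.g. in a system's OnDestroy
var persistent = new NativeList<T>(Allocator.Persistent);
persistent.Dispose();
```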
List to NativeList
// before
var list = new List<T>();
list.Add(new T());
// after
var list = new NativeList<T>(Allocator.Temp);
list.Add(new T());
...
list.Dispose();
Array to NativeArray
// before
var array = new T[5];
array[1] = new T();
// after
var array = new NativeArray<T>(5, Allocator.Temp);
array[1] = new T();
...
array.Dispose();
NativeList<T> can be viewed as a NativeArray without any performance hit, because it uses an array internally. But if you use NativeList.ToArray it will return a copy, and you will have to pay for another allocation.
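A minimal sketch of the difference, assuming the Unity.Collections API (AsArray returns a view over the list's internal buffer, while ToArray copies into a new allocation):

```csharp
var list = new NativeList<int>(Allocator.Temp);
list.Add(1);
list.Add(2);

// view: no copy, shares the list's internal buffer
NativeArray<int> view = list.AsArray();

// copy: a separate allocation that must be disposed on its own
NativeArray<int> copy = list.ToArray(Allocator.Temp);

list.Dispose();
copy.Dispose();
```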
Query results can be copied to arrays conveniently via:
var query = SystemAPI.QueryBuilder().WithAll<Comp1, Comp2>().Build();
var entities = query.ToEntityArray(Allocator.Temp);
var comp1s = query.ToComponentDataArray<Comp1>(Allocator.Temp);
HashSet and Dictionary

Both NativeHashSet and NativeHashMap take an initial capacity in their constructor; estimate this number so that your container doesn't have to reallocate frequently. When the capacity is exceeded it grows geometrically (roughly doubling), so the number of reallocations is logarithmic in the final size.
// before
var set = new HashSet<T>();
// after
var set = new NativeHashSet<T>(initialCapacity, Allocator.Temp);
// before
var dict = new Dictionary<K, T>();
// after
var dict = new NativeHashMap<K, T>(initialCapacity, Allocator.Temp);
Nested Container

Only a Dictionary of List has a direct native equivalent. Other nesting types need deeper refactoring.
// before
var dictOfList = new Dictionary<K, List<T>>();
dictOfList[key1].Add(val1);
dictOfList[key1].Add(val2);
dictOfList.Remove(key2);
dictOfList[key1].Remove(val1);
foreach (var val in dictOfList[key1]) Debug.Log(val);
// after
var dictOfList = new NativeParallelMultiHashMap<K, T>(initialCapacity, Allocator.Temp);
dictOfList.Add(key1, val1);
dictOfList.Add(key1, val2);
dictOfList.Remove(key2);
dictOfList.Remove(key1, val1);
foreach (var val in dictOfList.GetValuesForKey(key1)) Debug.Log(val);
Parallel Writer

These native containers support writing from within a job. We just need to acquire a ParallelWriter for the container:
internal struct Job : IJobParallelFor {
    public NativeList<T>.ParallelWriter Data;
    public void Execute(int index) {
        Data.AddNoResize(new T());
    }
}
...
// reserve enough capacity up front: AddNoResize never grows the list
var list = new NativeList<T>(itemCount, Allocator.TempJob);
...
var handle = new Job {
    Data = list.AsParallelWriter()
}.Schedule(itemCount, 64); // IJobParallelFor takes a length and a batch size
list.Dispose(handle);
NativeHashSet and NativeHashMap can't be converted to a ParallelWriter; we must use their parallel versions instead:
- NativeParallelHashSet
- NativeParallelHashMap
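As a sketch, writing to a NativeParallelHashMap from a parallel job looks much like the NativeList case above (the HashJob name, key/value types, and itemCount are illustrative placeholders):

```csharp
internal struct HashJob : IJobParallelFor {
    public NativeParallelHashMap<int, int>.ParallelWriter Data;
    public void Execute(int index) {
        // TryAdd instead of Add: another thread may have inserted the key already
        Data.TryAdd(index, index * 2);
    }
}
...
var map = new NativeParallelHashMap<int, int>(itemCount, Allocator.TempJob);
var handle = new HashJob { Data = map.AsParallelWriter() }.Schedule(itemCount, 64);
map.Dispose(handle);
```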