```bash
# for building detectron
conda install -c conda-forge gcc=11.2.0
conda install -c conda-forge gxx=11.2.0

conda env config vars set LD_LIBRARY_PATH="/home/cyl/miniconda3/envs/fcsgg/lib/"
conda env config vars set CPATH="/home/cyl/miniconda3/envs/fcsgg/include/"
conda env config vars set CUDA_HOME="/home/cyl/miniconda3/envs/fcsgg/"
```
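Variables set via `conda env config vars set` only take effect after the environment is re-activated. A minimal sanity check before rebuilding (assuming the env is named fcsgg and PyTorch is installed in it; not part of the original notes):

```python
# Run after `conda deactivate && conda activate fcsgg` so the vars are exported.
import os, subprocess

print(os.environ.get("CUDA_HOME"))   # expect /home/cyl/miniconda3/envs/fcsgg/
print(subprocess.run(["gcc", "--version"], capture_output=True,
                     text=True).stdout.splitlines()[0])  # expect gcc 11.2.0

# PyTorch reads CUDA_HOME when compiling CUDA extensions:
from torch.utils.cpp_extension import CUDA_HOME
print(CUDA_HOME)
```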
File "/home/cyl/Reconst/fcsgg/fcsgg/data/detection_utils.py", line 432, in generate_score_map masked_fmap = torch.max(masked_fmap, gaussian_mask * k) RuntimeError: The size of tensor a (55) must match the size of tensor b (56) at non-singleton dimension 1
If we go to qubits, not much in this picture changes. While a qubit has infinitely many possible states, what matters is the basis of the state space, which, loosely speaking, is the minimal set of states in terms of which every other state can be expressed. For a qubit this turns out to be two, for example the up state and the down state. In the language from above, each qubit therefore has 2 'possible assignments', and you have $n$ of them, so by the arguments presented above there are $2^n$ unique states. Because we are doing quantum mechanics, superpositions of these states are also allowed, but that doesn't change the picture: the dimensionality of the system is still $2^n$.
Qubits can be realized by the quantum state of a single photon; the storage-dimension limit is still $2^n$.
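To make the $2^n$ growth concrete, a minimal NumPy sketch (not from the original notes) that builds an $n$-qubit state vector as a Kronecker product of single-qubit states and prints its dimension:

```python
import numpy as np

# Single-qubit basis states (the "up" and "down" states from above).
up = np.array([1.0, 0.0])
down = np.array([0.0, 1.0])

def n_qubit_state(single_states):
    """Kronecker product of single-qubit states -> full state vector."""
    state = single_states[0]
    for s in single_states[1:]:
        state = np.kron(state, s)
    return state

for n in range(1, 6):
    psi = n_qubit_state([up] * n)
    print(n, psi.shape)   # dimension is 2**n: (2,), (4,), (8,), (16,), (32,)
```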
States for qumodes
Compared with a qubit, a qumode describes the state of an optical field mode, which in theory has infinitely many basis states.
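Concretely, in the Fock (photon-number) basis a qumode state is an infinite superposition (standard notation, added here for context):

$$|\psi\rangle = \sum_{n=0}^{\infty} c_n\,|n\rangle, \qquad \sum_{n=0}^{\infty} |c_n|^2 = 1,$$

so a register of qumodes is not capped at dimension $2^n$; in numerical simulation the sum is truncated at a finite cutoff dimension.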
Squeezing gates
Applying squeezing gates to the vacuum state generates different states in the qumodes.
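For reference, the single-mode squeezing operator in a common convention (e.g. the one behind Strawberry Fields' Sgate; this equation is added here, not from the original notes) is

$$S(z) = \exp\!\left(\tfrac{1}{2}\left(z^{*}\hat{a}^{2} - z\,\hat{a}^{\dagger 2}\right)\right), \qquad z = re^{i\phi},$$

and $S(z)|0\rangle$ is the single-mode squeezed vacuum, a superposition of even photon-number states only.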
The S2 gate generates the Two-mode Squeezed Vacuum (TMSV) state when applied to the vacuum state $|0,0\rangle$, which can be expressed mathematically as shown below.
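A standard closed form (supplied here for completeness, assuming the convention of Strawberry Fields' S2gate with $z = re^{i\phi}$; the equation is not from the original notes):

$$S_2(z)\,|0,0\rangle = \frac{1}{\cosh r} \sum_{n=0}^{\infty} \left(e^{i\phi}\tanh r\right)^{n} |n,n\rangle$$

The perfect photon-number correlation (support only on $|n,n\rangle$) can be checked numerically; a minimal sketch using the Strawberry Fields Fock backend, where cutoff_dim truncates the formally infinite Fock space:

```python
import numpy as np
import strawberryfields as sf
from strawberryfields.ops import S2gate

r = 0.8                       # squeezing magnitude (phase left at 0)
prog = sf.Program(2)
with prog.context as q:
    S2gate(r) | (q[0], q[1])  # two-mode squeezing on the vacuum |0,0>

eng = sf.Engine("fock", backend_options={"cutoff_dim": 6})
state = eng.run(prog).state

# Joint Fock probabilities: the mass should sit on the diagonal |n,n>,
# with P(n,n) = tanh(r)^(2n) / cosh(r)^2, and ~0 elsewhere.
probs = state.all_fock_probs()
print(np.round(np.diag(probs), 4))
```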
raw data: for each subject (S1, S2, …), each action (walking, waiting, smoking, …), and each sub-sequence (1/2): $(n) \times 99$ (np.ndarray, float32)
From data_utils.load_data() used by translate.read_all_data()
train data: a dictionary keyed by (subject_id, action, subaction_id, 'even'), holding the raw data (even rows only) with one-hot encoding columns for the action type; if a single action is specified (the normal case), an all-ones column is simply appended to the raw data instead (see the sketch after this list). Size of each dictionary value: $(n/2) \times (99 + \text{action count})$
complete data: all data joined together, across subjects, actions, and sub-sequences: $(n) \times 99$
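A minimal sketch of the one-hot append described for the train data above (function and argument names are illustrative, not the actual data_utils API):

```python
import numpy as np

def append_one_hot(seq: np.ndarray, action_idx: int, num_actions: int) -> np.ndarray:
    """Append num_actions one-hot columns encoding this sequence's action.

    With a single specified action (num_actions == 1) this degenerates to
    appending one all-ones column, matching the "normal case" above.
    """
    one_hot = np.zeros((seq.shape[0], num_actions), dtype=seq.dtype)
    one_hot[:, action_idx] = 1.0
    return np.concatenate([seq, one_hot], axis=1)  # (n/2, 99 + action count)
```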
From translate.read_all_data() used by translate.train()
train set: normalized train data, with dimensions whose $\mathrm{std} < 10^{-4}$ (computed over the complete data) thrown out. Size of each dictionary value: $(n/2) \times ((99 - \text{dropped dimension count}) + \text{action count})$
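A minimal NumPy sketch of this filtering/normalization step (illustrative names; the real logic lives in data_utils and translate.read_all_data()):

```python
import numpy as np

def normalization_stats(complete_data: np.ndarray, eps: float = 1e-4):
    """Per-dimension mean/std over the complete (n, 99) data, plus the
    indices of dimensions to keep (std >= eps) and to drop (std < eps)."""
    mean = complete_data.mean(axis=0)
    std = complete_data.std(axis=0)
    dims_to_use = np.where(std >= eps)[0]
    dims_to_ignore = np.where(std < eps)[0]
    std[dims_to_ignore] = 1.0          # avoid divide-by-zero
    return mean, std, dims_to_use, dims_to_ignore

def normalize(seq: np.ndarray, mean, std, dims_to_use):
    """Z-score a (n/2, 99) sequence and keep only the used dimensions."""
    return ((seq - mean) / std)[:, dims_to_use]
```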
Human Dimension
After analyzing the complete data, the human dimension has been fixed to $54$ (i.e., 54 of the 99 raw dimensions survive the std filter above).
From Seq2SeqModel.get_batch() used by translate.train()
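A hedged sketch of what a get_batch() along these lines does; the names and the encoder/decoder window split are assumptions, not read from Seq2SeqModel:

```python
import numpy as np

def get_batch(train_set: dict, batch_size: int,
              source_seq_len: int, target_seq_len: int, input_size: int):
    """Sample batch_size random windows from the train-set dictionary and
    split each window into encoder inputs and decoder inputs/targets."""
    keys = list(train_set.keys())
    total_len = source_seq_len + target_seq_len
    enc = np.zeros((batch_size, source_seq_len - 1, input_size), np.float32)
    dec_in = np.zeros((batch_size, target_seq_len, input_size), np.float32)
    dec_out = np.zeros((batch_size, target_seq_len, input_size), np.float32)
    for i in range(batch_size):
        seq = train_set[keys[np.random.randint(len(keys))]]
        start = np.random.randint(0, seq.shape[0] - total_len)
        window = seq[start:start + total_len, :]
        enc[i] = window[:source_seq_len - 1, :]       # encoder inputs
        dec_in[i] = window[source_seq_len - 1:-1, :]  # shifted decoder inputs
        dec_out[i] = window[source_seq_len:, :]       # ground-truth targets
    return enc, dec_in, dec_out
```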