Introduction
With the advent of the big data era, there is an urgent demand for integrated, miniaturized storage devices with high density and ultra-large capacity.[] Among the many storage modes, opto-magnetization recording[] has attracted increasing interest owing to its distinctive advantages, including high density, low energy consumption, repeated erasability, and ultrafast response. Since the inverse Faraday effect (IFE)[] in nonabsorbing magneto-optic (MO) films was revealed in both theory and experiment, opto-magnetization recording has emerged as a competitive candidate for next-generation storage technology.
Over the past few decades, numerous studies have focused on accelerating the development of opto-magnetization recording. To facilitate potential applications in all-optical magnetic recording,[] magnetic resonance microscopy,[] atom trapping,[] holography,[] and the manipulation of spin waves,[] super-resolved magnetization structures such as a single ellipsoidal or spherical magnetization spot,[] magnetization needles,[] magnetization spot arrays,[] magnetization chains,[] magnetic vortex cores,[] and twisted longitudinal magnetization textures[] have been generated by tightly focusing incident light fields tailored with amplitude, phase, and polarization modulations. However, the magnetization domain of these versatile structures is still bounded by the ultimate diffraction limit of λ/(2NA) (NA is the numerical aperture of the objective lens), which restricts ultra-high-density storage. To alleviate this deficiency, a longitudinal magnetization superoscillation hotspot () has been achieved by tightly focusing an azimuthally polarized high-order Laguerre–Gaussian vortex beam.[] Beyond the elementary engineering of the magnetization structure, tailoring the magnetization orientation offers an additional reconfigurable dimension, further reinforcing magnetic memory capacity. Accordingly, a plethora of magnetization structures with prescribed orientations, including a single in-plane magnetization spot,[] transverse magnetization spot arrays,[] magnetization spots or needles (and arrays) with steerable 3D orientation,[] and twist-controllable magnetization orientation,[] have become accessible by tightly focusing encoded vectorial beams onto an isotropic MO film. To achieve such magnetization structures with diverse orientations, design approaches such as reversing electric dipoles,[] ray-tracing models (RMs),[] and axially symmetric destruction[] have been implemented.
Unfortunately, these methods lack flexibility and efficiency. In fact, tailoring the incident beam to attain a prescribed magnetization distribution is a representative inverse design problem. Data-driven machine and deep learning have exhibited unparalleled competence in space weather forecasting,[] image completion,[] and ciphertext-only attacks.[] Facing similar challenges, machine learning inverse design has been demonstrated to be an accurate and time-efficient approach in 3D vectorial holography,[] adaptive optics,[] optical information storage,[] and the structure design of metaphotonics.[] Recently, we succeeded in applying this method to generate a magnetization spot with tunable 3D orientation.[] This pathway has been proven to offer distinctive advantages, including low time consumption, good flexibility, and high accuracy.
Despite these tremendous endeavors, the established avenues suffer from two limitations hindering multidimensional high-density opto-magnetization storage. First, the magnetization magnitude is an additional degree of freedom for storage and has rarely been exploited in previous magnetization shaping. Second, traditional machine learning inverse design is still time-consuming and requires massive amounts of training data. On the first point, we introduce the magnetization magnitude as an additional dimension to further enlarge the capacity of opto-magnetization storage. On the second point, we adopt physics-enhanced deep learning[] instead of traditional machine learning to achieve fast training with less data. Combining these two tactics, we propose 5D opto-magnetization recording with 3D spatial position, vectorial orientation, and magnitude, enabled by physics-enhanced deep learning.
Schematic Illustration to Yield Multidimensional Opto-Magnetization
The devised scheme, based on RMs for tight focusing, to produce an opto-magnetization spot with controllable 3D orientation and arbitrary magnitude is illustrated in Figure . The incident beam is radially polarized and is focused by a single high-NA lens to produce a longitudinally polarized component (), as shown in Figure . In Figure , the incident light is a radially polarized beam imposed with a π-phase-step filter[] along the x-axis and a π/2 phase delay. Here, "sgn" is the mathematical sign function, and sgn(x) plays the role of the π-phase-step filter along the x-axis. Owing to the constructive interference of the transversely polarized component () and the destructive interference of the other two polarized components ( and ), a polarized component along the x-axis () is produced at the focus. The additional π/2 phase delay appears as the imaginary factor j in the focused field . As sketched in Figure , the radially polarized beam modulated by a π-phase-step filter along the y-axis is focused by a single high-NA objective lens; here, sgn(y) plays the role of the π-phase-step filter along the y-axis. A polarized component along the y-axis () is generated at the focus owing to the constructive interference of the transversely polarized component () and the destructive interference of the other two polarized components ( and ). Slightly differently from Figure , an additional π/2 phase delay is introduced in the focused field in Figure . According to the IFE in an isotropic magnetically ordered MO film, the light-induced magnetization is proportional to the vector product of the electric field with its complex conjugate.[] When the phase difference between two orthogonally polarized electric fields is π/2, light-induced magnetization along the direction orthogonal to both fields emerges efficiently. This is why an additional π/2 phase delay is introduced in Figure .
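The IFE relation above, M ∝ i(E × E*), can be evaluated directly from the complex focal-field components. The following is a minimal numerical sketch (the field values and the proportionality constant `gamma` are illustrative assumptions, not values from this work); it confirms that two orthogonal field components with a π/2 phase delay induce magnetization along the third axis:

```python
import numpy as np

# Inverse Faraday effect: the light-induced magnetization is proportional
# to the vector product of the field with its complex conjugate,
# M = i * gamma * (E x E*), which is a real-valued vector.
def ife_magnetization(E, gamma=1.0):
    """E: complex 3-vector (Ex, Ey, Ez) at a focal point."""
    cross = np.cross(E, np.conj(E))   # purely imaginary vector
    return np.real(1j * gamma * cross)

# Ex real, Ey delayed by pi/2 (the j factor in the text): the induced
# magnetization points along the orthogonal z-axis.
E = np.array([1.0, 1j * 1.0, 0.0])
M = ife_magnetization(E)              # -> [0, 0, 2]
```

With the π/2 delay removed (Ey real), the cross product vanishes and no magnetization is induced, which is exactly the rationale given for introducing the delay.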
When the focused electric fields in Figure are combined, the magnetization component along the y-axis () is generated, as shown in Figure . Likewise, the magnetization components along the z-axis () and the x-axis () are produced in Figure and Figure , respectively. In principle, magnetization with controllable 3D orientation and arbitrary magnitude can be obtained by steering the weights of the magnetization components (, , ). The magnetization vector sweeps the unit sphere when the calculated magnetization magnitudes are normalized, as depicted in Figure . Conversely, the weights of the magnetization components (, , ) are determined by the amplitude factors of the incident beams in Figure . More directly, an opto-magnetization spot with arbitrary orientation and magnitude can be governed by the amplitude factors of the four kinds of incident beams.
[IMAGE OMITTED. SEE PDF]
A quantitative analysis of creating a light-induced magnetization spot with arbitrary orientation and magnitude is given below. The magnetization distribution is calculated with the Richards–Wolf vectorial diffraction theory and the IFE. Following the description in Figure , the final incident beam is the summation of four kinds of incident beams with different amplitude factors and is expressed as
Here, , , , and denote the four kinds of incident beams with different amplitude factors (, , , and ). We assume that describes a Bessel–Gaussian distribution characterized by .[] Here, indicates the ratio between the pupil radius and the beam waist, and is the maximum divergence angle determined by the NA of the objective lens. For brevity in what follows, we abbreviate this expression as . Under the tight-focusing condition, the focal electric field is given by[]
Through this analysis, opto-magnetization with arbitrary orientation and magnitude can be obtained by tuning the relevant parameters of the incident beam. Given a proper incident beam, the opto-magnetization can be calculated by the vector diffraction theory and the IFE in the MO film. Conversely, finding the incident beam that yields a prescribed opto-magnetization is an inverse problem, as proposed in our previous work. There, this challenge was successfully solved by a machine learning method. However, the devised machine learning model is a complete black box with a few drawbacks. First, a very large amount of training data (200 000 samples) is used; accumulating so many samples in advance takes considerable time. Second, the training process needs numerous epochs (at least a thousand) and is quite time-consuming. Last but not least, the machine learning model consists of purely mathematical operations without any physical interpretability.
Design of Physics Enhanced Deep Learning in Opto-Magnetization Shaping
To circumvent the deficiencies of traditional machine learning, we devise a novel opto-magnetization neural network (opto-magnetizationNet) by adopting the core idea of physics-enhanced deep neural networks widely implemented in phase imaging,[] ghost imaging,[] and diffractive imaging.[] The basic architecture is schematically outlined in Figure .
[IMAGE OMITTED. SEE PDF]
The first step is to obtain sufficient magnetization datasets. It is assumed that the amplitude factors of the incident beam satisfy , so only three amplitude factors are completely independent. Here, we take , , and as the independent amplitude factors representing the distribution of the incident beam, and vary their values to acquire different incident beams. Based on the vector diffraction theory and the IFE in the MO film, the corresponding light-induced magnetizations, defined by at the focus, are attainable.
The inverse design pathway employed in our opto-magnetizationNet contains two parts. The first part is a multilayer perceptron (MLP) architecture; the second is the physical model, represented by H, that describes the opto-magnetization process under the tight-focusing condition. In the MLP model, the magnetization distributions , , and form the input layer, and the incident beam, represented by , , and , forms the output layer; several hidden layers lie in between. The MLP model yields an estimate of the incident beam, whose amplitude factors are denoted by , , and . The physical model H then calculates the corresponding estimated magnetizations, represented by . The error between and M is used to optimize and update the weights and biases of the MLP model via gradient descent. As the iterations proceed, the calculated magnetization is forced to converge to the original real magnetization M, and the estimated amplitude factors of the incident beam converge to a feasible solution. Therefore, the optimized incident beam for a prescribed magnetization is obtained by our designed opto-magnetizationNet.
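This physics-enhanced training loop can be sketched in a few lines of PyTorch. The true physical model H (Richards–Wolf focusing plus the IFE) is replaced here by a simple differentiable stand-in, and the network sizes are illustrative assumptions, not those of the paper; the point is that the loss is computed on the magnetization produced by H, so no labeled amplitude factors are needed:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in for the differentiable physical model H: amplitude factors ->
# magnetization components. The real H is the tight-focusing + IFE chain.
def H(a):
    return torch.tanh(a)

# Small illustrative MLP mapping magnetizations to amplitude factors.
mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 3))
opt = torch.optim.Adam(mlp.parameters(), lr=1e-3)

M = torch.rand(32, 3)                         # target magnetizations (no labels)
for epoch in range(200):
    a_hat = mlp(M)                            # estimated incident-beam factors
    M_hat = H(a_hat)                          # magnetization via the physics model
    loss = nn.functional.mse_loss(M_hat, M)   # physics-enhanced loss
    if epoch == 0:
        loss0 = loss.item()
    opt.zero_grad(); loss.backward(); opt.step()
```

Because the error is back-propagated through H, the MLP is supervised by the physics itself rather than by precomputed (beam, magnetization) label pairs.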
Simulated Results Based On Physics-Enhanced Deep Learning
The MLP model in our devised opto-magnetizationNet is composed of an input layer with three elements, an output layer with three elements, and hidden layers with certain numbers of neurons. For this problem, eight hidden layers are established. In our setting, the input vector is a three-element array containing the magnetization components along the x-, y-, and z-axes (, , and ). These three magnetization components are normalized to the maximal magnetization magnitude before being fed into the MLP model. The output of the MLP model is another three-element array of amplitude factors (, and ) of the incident beam. A rectified linear unit is chosen as the activation function in each hidden layer, because the rectified linear unit is suited to input values ranging from 0 to 1. Before each rectified linear unit, a batch normalization layer normalizes the output values to the range [0,1]; this keeps the inputs within the active zone of the rectified linear unit. The first four hidden layers each contain 512 neurons, while the next four successively halve in width: 256, 128, 64, and 32. Since this is a typical regression problem, the mean square error (MSE) is selected as the loss function. To better optimize the weights of the opto-magnetizationNet, Adam, a robust optimizer, is employed.
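The architecture just described translates directly into PyTorch. The following is a sketch of that layout (layer and function names are our own; the widths, batch normalization placement, loss, and optimizer follow the text):

```python
import torch
import torch.nn as nn

# MLP mirroring the described architecture: 3 inputs, eight hidden layers
# of widths 512, 512, 512, 512, 256, 128, 64, 32, each followed by batch
# normalization and ReLU, then a 3-element output layer.
def build_mlp():
    widths = [3, 512, 512, 512, 512, 256, 128, 64, 32]
    layers = []
    for i in range(len(widths) - 1):
        layers += [nn.Linear(widths[i], widths[i + 1]),
                   nn.BatchNorm1d(widths[i + 1]),
                   nn.ReLU()]
    layers.append(nn.Linear(32, 3))           # amplitude factors
    return nn.Sequential(*layers)

model = build_mlp()
loss_fn = nn.MSELoss()                        # regression loss, as in the text
optimizer = torch.optim.Adam(model.parameters())
```

A batch of normalized magnetization triplets of shape (N, 3) maps to a batch of three predicted amplitude factors of the same shape.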
According to the vector diffraction theory under the tight-focusing condition and the IFE in the MO film, the light-induced magnetizations for 500 000 incident beam samples are calculated and accumulated. These samples are split 4:1 into training (400 000) and testing (100 000) sub-datasets. The testing dataset is completely blind and not used in training. As expected, the trained opto-magnetizationNet is capable of predicting values for the blind test data. In the training phase, hyperparameters such as the batch size and learning rate need to be delicately tuned. To alleviate vanishing or exploding gradients during training, Xavier initialization, a variance-scaling method, is used to initialize the neural network. Notably, all the training operations were conducted on a supercomputer node with an Intel(R) Xeon(R) CPU E5-2682 v4, 16 GB of RAM, and an NVIDIA Tesla V100 GPU. The program is implemented in Python 3.6 with PyTorch on the Ubuntu 18.04 operating system.
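The 4:1 split and the Xavier initialization mentioned above can be sketched as follows (the placeholder data and the small network are illustrative, not the paper's dataset or model):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# 500 000 samples split 4:1 into training (400 000) and testing (100 000).
data = torch.rand(500_000, 3)                 # placeholder magnetization triplets
n_train = int(0.8 * len(data))
train_set, test_set = data[:n_train], data[n_train:]

# Xavier (Glorot) initialization: a variance-scaling scheme that helps
# mitigate vanishing/exploding gradients at the start of training.
def init_weights(m):
    if isinstance(m, nn.Linear):
        nn.init.xavier_uniform_(m.weight)
        nn.init.zeros_(m.bias)

net = nn.Sequential(nn.Linear(3, 512), nn.ReLU(), nn.Linear(512, 3))
net.apply(init_weights)
```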
Training: a Single Input Sample Versus Whole Input Samples
To demonstrate the time efficiency, the training outcomes for a single input sample and for the whole input samples (a training dataset of 400 000 samples) are plotted in Figure .
[IMAGE OMITTED. SEE PDF]
For the single-input-sample case, the batch size and initial learning rate are chosen as 1 and 0.001, respectively. For training on the whole input samples, the batch size is set to 1000 and the initial learning rate to 0.0001. The learning rates for the two cases differ markedly: the learning rate for the single input sample is set somewhat larger because its loss varies sharply, whereas for the whole input samples the loss varies more gently, being averaged over a great deal of training data.
In Figure , the loss, represented by the MSE, is plotted against the number of iterations (epochs). The loss for training on the single input sample declines rapidly within the first 100 epochs, then decreases slowly and converges to a stable value of 0.002%. Likewise, the loss for training on the whole input samples decreases quickly in the first ten epochs and converges to a slightly larger value of 0.02%. Notably, the time spent per training run for the single input sample and the whole input samples is completely different because of the different amounts of fed data. The estimated converging times for these two kinds of input samples are 13 and 613 s, respectively. However, the whole training dataset comprises 400 000 samples: if they were trained one by one like the single input sample, the total training time would be 13 × 400 000 = 5 200 000 s. The ratio between these two values (5 200 000 and 613) reaches 8482.9, a speedup of more than three orders of magnitude for training on the whole input samples. This ultra-high acceleration reveals the exclusive superiority of powerful parallel computing for training on big data.
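The quoted speedup is simple arithmetic and can be checked directly:

```python
# Serial estimate: 400 000 samples at 13 s each, versus 613 s of batched
# (parallel) training over the same data.
t_serial = 13 * 400_000        # 5 200 000 s
t_parallel = 613               # s
speedup = t_serial / t_parallel
print(round(speedup, 1))       # 8482.9, i.e. > 3 orders of magnitude
```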
Comparison with Other General Neural Networks
Besides the proposed opto-magnetizationNet, many other neural networks are used for solving optimization problems. The single MLP architecture and the tandem neural model are two typical design frameworks. We compare the opto-magnetizationNet with these two neural networks and analyze the training performance of each, as shown in Figure .
[IMAGE OMITTED. SEE PDF]
For the single MLP scheme, the optimization target is the loss between the real and estimated amplitude factors of the incident beam. For a fair comparison, the loss criterion for the MLP scheme must be consistent with that of the opto-magnetizationNet; hence, the loss between the real magnetization components and the estimated ones is obtained, as plotted in Figure , by feeding the predicted amplitude factors of the incident beam into the opto-magnetization process (H). The architecture of the single MLP neural network is in fact the same as the MLP model inside the opto-magnetizationNet, and the corresponding hyperparameter values are also identical. In the tandem neural network, the idea is to first train a forward modeling network mapping the opto-magnetization process from the incident beam to the magnetization field. This pretrained forward network is then connected to the output of the MLP neural network, with the prediction error between the real magnetization components and the estimated ones serving as the supervision signal. In fact, the pretrained forward network is a similar MLP neural network: apart from having no batch normalization layers, its architecture and hyperparameter values are the same as those of the single MLP neural network.
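The tandem scheme can be sketched in two stages, again with a simple differentiable stand-in for the true opto-magnetization process and with illustrative network sizes of our own choosing: first a forward network is pretrained to imitate the physics, then it is frozen and an inverse MLP is trained through it.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

def physics(a):                                # stand-in for the true process
    return torch.tanh(a)

# Stage 1: pretrain a forward network (incident beam -> magnetization).
forward_net = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 3))
opt_f = torch.optim.Adam(forward_net.parameters(), lr=1e-3)
a = torch.rand(256, 3)
for _ in range(200):
    loss_f = nn.functional.mse_loss(forward_net(a), physics(a))
    opt_f.zero_grad(); loss_f.backward(); opt_f.step()

# Stage 2: freeze the forward network and train the inverse MLP so that
# forward_net(inverse_net(M)) reproduces the target magnetization M.
for p in forward_net.parameters():
    p.requires_grad_(False)
inverse_net = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 3))
opt_i = torch.optim.Adam(inverse_net.parameters(), lr=1e-3)
M = physics(torch.rand(256, 3))                # target magnetizations
for _ in range(200):
    loss_i = nn.functional.mse_loss(forward_net(inverse_net(M)), M)
    opt_i.zero_grad(); loss_i.backward(); opt_i.step()
```

The structural difference from the opto-magnetizationNet is that here the supervision flows through a *learned* surrogate of the physics, whereas the opto-magnetizationNet back-propagates through the physical model H itself.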
In Figure , the loss values on the training and testing datasets for the opto-magnetizationNet, the single MLP model, and the tandem neural network are plotted over the training process. The loss curves for the opto-magnetizationNet and the tandem neural network are nearly the same: their values decrease quickly within the first 50 epochs and converge to a small value of about 0.02%. In comparison, the loss for the single MLP neural network decreases more slowly and converges in the vicinity of 2.3%. This considerably larger converging loss indicates the worse predictive performance of the single MLP model. The convergence times for the opto-magnetizationNet, the single MLP model, and the tandem neural network are estimated to be 613, 4350, and 2173 s, respectively; among the three, the opto-magnetizationNet is the most efficient and least time-consuming.
To further demonstrate the performance of the three kinds of neural networks, the accuracies of the magnetization orientations on the training and testing datasets are depicted in Figures and , respectively.
[IMAGE OMITTED. SEE PDF]
[IMAGE OMITTED. SEE PDF]
Specifically, the accuracy of the magnetization orientation in Figure is calculated by the following evaluation criterion. The real magnetization orientation and the predicted one are two vectors; to estimate the accuracy of the predicted orientation, the direction cosine between them, represented by , is employed. Here, M and are the real and predicted magnetization vectors. If the angle between these two vectors is 0, the direction cosine is 1: the directions are identical and the accuracy of the predicted orientation equals 1. Conversely, the direction cosine is 0 when the angle between the two vectors is 90°: the vectors are perpendicular and the accuracy of the predicted orientation is 0. Within the first 100 training epochs, the orientation accuracy for the opto-magnetizationNet ranks first, that for the tandem neural network second, and that for the single MLP model lowest. The final accuracies for the opto-magnetizationNet and the tandem neural network are almost identical, about 0.99, approaching 1, whereas for the single MLP neural network the final accuracy is only about 0.9. By comparison, the predicted orientation accuracy of the opto-magnetizationNet is the highest throughout the whole training phase.
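The direction-cosine criterion is the normalized dot product of the two vectors; a short sketch (vectors chosen for illustration):

```python
import numpy as np

# Orientation accuracy: direction cosine between the real and predicted
# magnetization vectors, cos(theta) = M . M_hat / (|M| |M_hat|).
def orientation_accuracy(M, M_hat):
    return float(np.dot(M, M_hat)
                 / (np.linalg.norm(M) * np.linalg.norm(M_hat)))

# Identical directions give 1 (regardless of magnitude);
# perpendicular vectors give 0.
same = orientation_accuracy(np.array([0., 0., 1.]), np.array([0., 0., 0.5]))
perp = orientation_accuracy(np.array([1., 0., 0.]), np.array([0., 1., 0.]))
```

Note that this metric is insensitive to magnitude, which is exactly why the magnitude accuracy below is evaluated separately.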
In addition, the accuracies of the predicted magnetization magnitudes for these three kinds of neural networks are revealed in Figure .
The accuracy of the predicted magnetization magnitude is evaluated as . When the real and predicted magnetization magnitudes are identical, this accuracy equals 1. The final accuracy of the predicted magnetization magnitude for the opto-magnetizationNet is about 99.5%; for the tandem neural network it is about 97%, a trivially small difference. For the MLP model, however, the final accuracy reaches only about 60%, far lower than the other two neural networks.
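The exact expression for the magnitude accuracy is elided in the text above; one natural choice consistent with the stated property (accuracy equal to 1 for identical magnitudes) is a relative-error form, sketched here purely as an assumption:

```python
# Assumed relative-error form of the magnitude accuracy: 1 when the real
# and predicted magnitudes coincide, decreasing with relative error.
# (The paper's exact formula is not recoverable from the extracted text.)
def magnitude_accuracy(m_real, m_pred):
    return 1.0 - abs(m_real - m_pred) / m_real

acc_exact = magnitude_accuracy(0.700, 0.700)   # identical -> 1.0
acc_off = magnitude_accuracy(0.500, 0.400)     # 20% relative error -> 0.8
```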
Next, the number of input samples used for training is gradually compressed, and the performance of the opto-magnetizationNet with the compressed input samples is investigated, as shown in Figure .
[IMAGE OMITTED. SEE PDF]
The loss values on the testing dataset for different numbers of input samples, namely 500 000, 50 000, 5000, 500, 50, and 5, are plotted over the training process. As before, all the input samples are divided 4:1 into training and testing datasets. When the number of input samples is compressed from 500 000 down to 50, the loss values decrease at gradually lower speeds, and the converged loss values cluster near 0.3%. But when only 5 input samples are chosen, the loss value cannot converge: it is considerably larger (about 0.09) in a single training run, and it is unstable, oscillating across different training runs.
To demonstrate the robustness under compressed input samples, the predicted testing accuracies of the magnetization orientation and magnitude with a decreasing number of input samples are illustrated in Figure .
[IMAGE OMITTED. SEE PDF]
The accuracies decrease as the number of input samples decreases. In particular, when the number of input samples is compressed to 50, the accuracies of the magnetization orientation and magnitude both reach 87%. With only 5 input samples in training, the accuracies worsen to less than 42%. To guarantee a high accuracy above 80%, the smallest number of compressed input samples is therefore 50.
The training and testing losses for the opto-magnetizationNet, the single MLP model, and the tandem neural network are also plotted for the smallest input-sample number of 50, as shown in Figure . In this most compressed case, the loss value for the opto-magnetizationNet is the lowest (about 0.5%), followed by the tandem neural network (about 2%); the difference between these two networks is relatively small. The MLP neural network performs worst, severely overfitting on the testing dataset.
[IMAGE OMITTED. SEE PDF]
Dataset Equivalence between the Opto-Magnetization Process and Artificial Collection
In our proposed opto-magnetizationNet, the number of input samples can be compressed to the utmost extent. However, data collection is still quite time-consuming, especially for a complicated physical process. Careful analysis shows that the real labels of the MLP output are not actually exploited in training the opto-magnetizationNet; in this sense, the opto-magnetizationNet is naturally a label-free neural network. Therefore, the input samples can be produced artificially, subject only to the physical constraint that the magnetization components lie in [0,1]. In Figure , the training and testing losses for the calculated and artificial input samples are given.
[IMAGE OMITTED. SEE PDF]
The calculated input samples are obtained from the opto-magnetization process comprising the vector diffraction theory and the IFE. The artificial input samples are acquired by choosing values uniformly distributed in the range [0,1]. For comparison, the number of input samples is fixed at 500 000 in both cases. As seen in Figure , all the loss values decrease rapidly within the first 50 epochs. The losses on the training and testing datasets are about 0.15% for the artificial input samples and about 0.02% for the calculated ones; the difference between these two kinds of input samples is so tiny that it can be neglected. Besides this case of abundant input samples, the comparison between the two kinds of input samples is also carried out at the smallest compressed number of input samples: the training and testing losses for the calculated and artificial input samples with 50 input samples are plotted in Figure .
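Generating the artificial input samples requires no physics at all; a minimal sketch (the seed and array layout are our own choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Artificial input samples: 500 000 magnetization triplets drawn uniformly
# from [0, 1], subject only to that physical constraint. No opto-
# magnetization forward calculation is needed, because the network is
# label-free: only the physics model H supervises training.
artificial = rng.uniform(0.0, 1.0, size=(500_000, 3))
```

This is the practical payoff of the label-free property: the costly forward simulation is removed from dataset preparation entirely.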
[IMAGE OMITTED. SEE PDF]
All the loss values decrease rapidly as the training epoch ranges from 0 to 200, and the final loss values converge to the neighborhood of 0.2%. The difference between the two kinds of input samples is negligibly small. Combining the training results for abundant input samples and for the smallest number of input samples, the proposed opto-magnetizationNet is proven to be a deep learning network that does not require input samples calculated from the opto-magnetization process.
Creation of 5D Opto-Magnetization Spot Arrays
By leveraging our devised physics-enhanced deep learning framework, the opto-magnetizationNet, the required incident beam can be acquired to achieve a prescribed magnetization. A single magnetization spot with arbitrary orientation and magnitude can then be achieved by tightly focusing the tailored incident beam. The desired magnetization orientation components and magnitudes (, , , and ) and the estimated ones (, , , and ) are shown in Table , which lists eight different magnetizations, including orientations and magnitudes, in detail.
Table 1 Comparison between the target magnetization orientation components as well as magnitudes (, , , and ) and the estimated ones (, , , and )
Magnetization label | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
0.300 | 0.346 | 0.300 | 0.250 | 0.000 | 0.303 | 0.225 | 0.248 | |
0.172 | 0.200 | 0.000 | 0.433 | 0.000 | 0.525 | 0.130 | 0.429 | |
0.200 | 0.000 | 0.000 | 0.000 | 0.300 | 0.350 | 0.150 | 0.495 | |
0.400 | 0.400 | 0.300 | 0.500 | 0.300 | 0.700 | 0.300 | 0.700 | |
0.300 | 0.347 | 0.300 | 0.250 | 0.000 | 0.303 | 0.225 | 0.247 | |
0.172 | 0.200 | 0.000 | 0.433 | 0.000 | 0.525 | 0.130 | 0.429 | |
0.200 | 0.001 | 0.001 | 0.000 | 0.300 | 0.350 | 0.150 | 0.494 | |
0.400 | 0.400 | 0.300 | 0.500 | 0.300 | 0.700 | 0.300 | 0.700 |
Both the magnetization components and the magnitudes of the desired and estimated cases are almost the same. Note that orientation and magnitude constitute 2D information of the magnetization recording; combined with the 3D space in which each spot is located, the capacity expands to 5D. Furthermore, magnetization spot arrays with the different magnetizations shown in Table can be produced by point-to-point scanning or phase shifting.[] Combining the magnetization spot arrays with the 5D channels, the corresponding 5D opto-magnetization spot arrays are displayed in Figures and .
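As a quick consistency check on Table 1, each quoted magnitude equals the Euclidean norm of its three magnetization components to within rounding; the sketch below verifies four of the eight columns:

```python
import math

# (mx, my, mz, |M|) taken from Table 1, labels 1, 2, 4, and 6.
targets = [
    (0.300, 0.172, 0.200, 0.400),
    (0.346, 0.200, 0.000, 0.400),
    (0.250, 0.433, 0.000, 0.500),
    (0.303, 0.525, 0.350, 0.700),
]
for mx, my, mz, mag in targets:
    # Quoted magnitude should match sqrt(mx^2 + my^2 + mz^2) to rounding.
    assert abs(math.sqrt(mx**2 + my**2 + mz**2) - mag) < 5e-3
```

This confirms that the tabulated magnitude is the fourth, independent dimension only up to the normalization of the three components, as described in the text.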
[IMAGE OMITTED. SEE PDF]
[IMAGE OMITTED. SEE PDF]
In Figure , the magnetization orientation varies with spatial location as expected. To show further detail, the magnetization orientations at the label-1 location are enlarged; in this enlarged image, they point almost in the same direction. The calculated magnetization orientations at the different locations accord with the outcomes in Table . In Figure , the distributions of the magnetization magnitudes at the label-3 and label-6 locations are given; the magnetization magnitudes near the focal region are about 0.30 and 0.70, respectively, consistent with the desired targets in Table . Taking Figure and Figure together, each spot in the opto-magnetization spot arrays has an independent 3D spatial position, vectorial orientation, and arbitrary magnitude. This special magnetization structure tremendously increases the storage capacity compared with other methods, and the proposed design strategy strongly advances the development of data storage and memories.
Conclusion
In conclusion, we propose the concept of 5D opto-magnetization, possessing magnetization magnitude, orientation, and 3D spatial position, and present the opto-magnetizationNet, a physics-enhanced deep learning framework composed of an MLP architecture and opto-magnetization physics, to achieve this structure. According to the RMs under the tight-focusing condition and the IFE as the representative opto-magnetization phenomenon, tightly focusing a tailored radially polarized incident beam with four kinds of phase/amplitude modulations produces four polarized focal fields, enabling a magnetization spot with both controllable magnitude and steerable orientation. Based on the opto-magnetization process in the focal region, datasets of magnetizations and incident beams are accumulated and trained by the opto-magnetizationNet. Compared with single-input-sample feeding, highly efficient parallel computing is manifested in training on the whole input samples. In comparison with the usual single MLP and tandem neural networks, the opto-magnetizationNet achieves high accuracy with a small amount of training data. Furthermore, the opto-magnetizationNet is proven to be a robust deep neural network that works without collected data calculated from the physical process. Finally, 5D opto-magnetization spot arrays with arbitrary orientation and magnitude in each spot are demonstrated. More importantly, the presented results and methodology may open broad prospects in light-induced magnetization shaping, encompassing both structure and orientation, as well as in the manipulation of structured light.
Acknowledgements
This work was supported by the National Natural Science Foundation of China (12004155, 11974258, 11904152, 11604236, 61575139, 61504052); the Jiangxi Provincial Natural Science Foundation (20212BAB214056); the Key Research and Development (R&D) Projects of Shanxi Province, China (201903D121127); and the Scientific and Technological Innovation Programs of Higher Education Institutions in Shanxi (2019L0151). All the simulations reported in this work were performed using the high-performance computational facilities at the Institute of Space Science and Technology of Nanchang University.
Conflict of Interest
The authors declare no conflict of interest.
Data Availability Statement
The data that support the findings of this study are available from the corresponding author upon reasonable request.
Copyright John Wiley & Sons, Inc. 2023
Abstract
In the era of big data, all-optical control of magnetization is recognized as an alternative scheme that boosts the accelerating advance of multifunctional integrated opto-magnetization devices with high-density capacity. The light-induced magnetizations demonstrated so far steer their spatial orientations and structures by engineering complicated phase, amplitude, and polarization modulations of incident wavefronts, which, however, suffer from low efficiency, weak flexibility, and limited dimensionality. To tackle these issues efficaciously, a novel strategy is proposed to achieve, for the first time, 5D opto-magnetization composed of 3D spatial location, vectorial orientation, and magnitude. It relies on physics-enhanced deep learning incorporating a multilayer perceptron (MLP) artificial neural network and opto-magnetization principles. The resulting magnetization morphology largely expedites progress in multi-dimensional storage. The proposed approach is time-efficient, flexible, and accurate in attaining the prescribed magnetization. Moreover, the presented findings and proposed route apply not only to magnetization manipulation but also to the control of structured light fields.
1 Institute of Space Science and Technology, Nanchang University, Nanchang, China
2 School of Information and Engineering, Nanchang University, Nanchang, China
3 Department of Physics, School of Physics and Materials Science, Nanchang University, Nanchang, China
4 School of Future Technology, Nanchang University, Nanchang, China
5 Key Lab of Advanced Transducers and Intelligent Control System, Ministry of Education and Shanxi Province, College of Physics and Optoelectronics, Taiyuan University of Technology, Taiyuan, China