The second question: It’s often predicted that between 2045 and 2050, AI will reach a technological turning point and surpass human capabilities, with comprehensive impacts on humanity. Is it possible that the ethical and moral issues of AI we’re concerned about today won’t actually unfold as predicted?