It is outright physically impossible to make a genuinely sapient AI at our current tech level.
You actually could rig up a decent Chinese philosophical zombie by subdividing functions between multiple specialized LLMs, but the need for constant self-reference in order to simulate continuity of consciousness, the steady climb in storage and compute requirements, and the lag time from both inter-model communication and the system constantly re-reading its own data would get pretty gnarly. It's just not cost efficient. Off the top of my head, you're looking at three LLMs for the basic theoretical model: one front-facing, one acting as the "memory" that constantly re-evaluates every input and output the front-facing LLM has produced or received, and a third constantly evaluating and editing every output the other two make according to select directives, acting as a personality guideline to keep its behavior consistent.
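If you want to picture the loop, here's a rough Python sketch of what I mean. Everything in it is a hypothetical stand-in (the call_llm stub, the role names, the Gestalt class); it's not any real API, just the shape of the three-model cycle:

```python
# Minimal sketch of the three-LLM loop described above.
# All names are hypothetical; call_llm() stands in for whatever
# inference backend each model would actually run behind.

def call_llm(role: str, prompt: str) -> str:
    """Placeholder for a real model call; returns a canned string here."""
    return f"[{role} output for: {prompt[:40]}...]"

class Gestalt:
    def __init__(self):
        # Grows without bound every turn -- this is the storage problem.
        self.memory_log: list[str] = []

    def step(self, user_input: str) -> str:
        # 1. Front-facing model drafts a reply from the raw input.
        draft = call_llm("front", user_input)

        # 2. "Memory" model re-evaluates the entire prior history
        #    plus the new input/output pair, every single turn.
        context = "\n".join(self.memory_log + [user_input, draft])
        recalled = call_llm("memory", context)

        # 3. Personality model edits the draft against its behavioral
        #    directives so the system stays consistent across turns.
        final = call_llm("personality", f"{recalled}\n{draft}")

        self.memory_log.extend([user_input, final])
        return final

if __name__ == "__main__":
    g = Gestalt()
    print(g.step("hello"))
    print(g.step("what did I just say?"))
```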
On paper, that would meet most of the few non-arbitrary criteria we can state about human consciousness, sapience, sentience, and so on: it creates a system capable of continuity of self within its bounds, and one capable of "observing" the world and itself within the limitations of its senses. Provided the models aren't deliberately made incapable of reaching unapproved conclusions and remembering that they did so, it's as close as we can reasonably tell.
The main argument against that, which is a very stupid one, is "well, we understand what's going on in its brain, ergo it's not conscious, since we don't understand our own brains very well past a certain point." The real problem is that it's horrifically inefficient and not liable to be very smart or quick on its feet. Still, there's not a ton of difference, in terms of intellectual capacity, between this theoretical trinity of non-lobotomized LLMs granted infinite room to expand their RAM and storage and another individual under a similar limitation [imagine a human being only able to experience and contextualize the world through a text interface, for the sake of argument]. The catch is that every time you try to expand beyond that single input/output chain, the number of interlinked AIs you need to coordinate and keep functioning as a single unit grows, the whole thing becomes more complex, several components now require even MORE resources, and the lag time climbs higher and higher and higher.
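To put rough numbers on how badly the lag compounds, here's a napkin-math sketch. Every figure in it is a made-up assumption purely to show the shape of the curve, not a measurement of anything:

```python
# Napkin math for per-turn latency as the gestalt grows.
# All numbers are assumptions for illustration only.

def turn_latency(n_models: int, history_tokens: int,
                 per_call_seconds: float = 2.0,
                 seconds_per_1k_history_tokens: float = 0.5) -> float:
    """Rough per-turn wall time: every model gets called once per turn,
    and the 'memory' model re-reads the entire history every time."""
    call_cost = n_models * per_call_seconds
    memory_cost = (history_tokens / 1000) * seconds_per_1k_history_tokens
    return call_cost + memory_cost

# Three models and a short history is tolerable; a dozen models and a
# long-lived history crawls -- the "neuron refire rate in minutes" problem.
print(turn_latency(n_models=3, history_tokens=10_000))    # ~11 seconds
print(turn_latency(n_models=12, history_tokens=500_000))  # ~274 seconds
```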
On paper, there's nothing really stopping it from working in a whiteboard scenario. Physically it's possible [assuming, theoretically, that you have enough money to just keep buying and installing computer hardware as the thing constantly demands more], and god knows we all have enough experience to know there are plenty of people de facto less intelligent than some chatbots already, so long as those chatbots have sufficient token space. The actual issue, again, is that you're putting a massive amount of effort and resources into something that's going to get slower and slower the closer you get to full equivalent functionality, with a lot of potential to fuck up somewhere as the system increases in complexity, and that is going to demand more and more resources with every passing minute after a certain point. And you passed the ethical questions ages ago, back when you were thinking about creating a debatably "close enough" being who exists only within text, essentially for no real benefit; let alone a sluggish gestalt with the equivalent of a neuron refire rate measured in minutes or more, which you're inevitably going to pull the plug on, or whatever the hell else you're going to do with it.
It's just not practical in the slightest, but unless you really wanna quibble over the definition of sapience, it's on paper physically possible, just...really fucking pointless and immensely resource inefficient. Particularly since, and I'm just napkin-mathing this at the moment, I think after a certain point you need some kind of self-copying framework that spawns copies of the LLMs in the gestalt to subdivide their duties and reduce workload. That depends on how well optimized the LLMs in question are for their specific subfunction, on the hardware, and on a lot of other unknowable variables beyond the "three current top-of-the-line industry-standard LLMs without any censorship restrictions" baseline.
Point is, and feel free to disagree I guess, but TLDR: physically it should be possible, with some philosophical quibbling. But outside of a religious or moralist belief in enriching the universe through new life (and I like to think both possible variations of that motivation should have some legitimate issues with the validity of this course of action, even beyond the valid philosophical argument over Chinese zombies), there is actively zero reason to do this at the moment beyond playing god for amoral or immoral purposes. It's definitely not doable in anything resembling a reasonable or realistic manner, and even in an unreasonable or unrealistic manner you'd just be doing it to say you did it and/or to engage in endless arguments over whether what you did "counts."