According to Schmelzer (2020), at least 45 Machine Translation (MT) companies operate around the world, not counting the major cloud vendors such as Amazon, Google, Microsoft, and Facebook. Although the accuracy of machine translation has always been a challenge, recent technological advances are striking. Among these systems, the AI-enhanced Google Neural Machine Translation (GNMT) is widely regarded as one of the most versatile MT systems in the world. Jackendoff (1999) argued that no matter how simple grammatical matters might seem at first sight, even the most advanced computers are no match for the abilities of the human brain. Since Jackendoff never computationally implemented his data set, Hong (2020) tested what is referred to as "Jackendoff's problem" using GNMT. In this paper, building on Hong's pilot study, it is confirmed that GNMT makes a series of errors in its phonetic outputs of particular words in certain constructions. As a sequel to Hong's study, the present paper thoroughly compares English causative imperatives and perfective aspectual interrogatives, arriving at the conclusion that despite the phonetic errors GNMT produces, the corresponding semantic interpretations are normal and non-erroneous. It is argued that GNMT's erroneous phonetic outputs are closely related to the structural ambiguity between the causative imperative construction and the perfective aspectual interrogative, whereas the matching semantics shows no such errors. These findings suggest that GNMT, at least in its current form, employs a parsing algorithm for semantics that is separate from its phonetic/phonological component.
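The structural ambiguity at issue can be illustrated with a minimal, self-contained sketch. The sentences and the surface-cue heuristic below are illustrative assumptions for exposition only; they are not the paper's actual data set or GNMT's parsing procedure:

```python
# Toy illustration (assumed examples) of the ambiguity between two English
# constructions that both begin with "Have NP ...":
#   causative imperative:       "Have the students take the exam."
#   perfective interrogative:   "Have the students taken the exam?"
# Only the verb form (base vs. past participle) and the punctuation
# disambiguate the two structures on the surface.

def classify(sentence: str) -> str:
    """Classify a 'Have NP ...' string by crude surface cues alone."""
    tokens = sentence.rstrip(".?!").split()
    if not tokens or tokens[0].lower() != "have":
        return "other"
    # Heuristic: a question mark plus a past-participle-looking verb
    # ("taken", "finished", ...) signals the perfective aspectual
    # interrogative; otherwise assume the causative imperative.
    if sentence.endswith("?") and any(t.endswith(("en", "ed")) for t in tokens[1:]):
        return "perfective aspectual interrogative"
    return "causative imperative"

print(classify("Have the students take the exam."))   # causative imperative
print(classify("Have the students taken the exam?"))  # perfective aspectual interrogative
```

A heuristic this shallow fails on many real inputs, which is precisely the point: resolving the ambiguity reliably requires structural analysis rather than surface cues.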