This study explores whether a Large Language Model (LLM) such as OpenAI's GPT can help human readers digest scientific reviews by generating a "meta-review" that provides an accept/reject recommendation along with a confidence level and an explanation. The investigation uses data from the NeurIPS 2022 conference, where each paper received three to six individual reviews and a meta-review stating the final decision based on them. The results show that GPT can generate useful review summaries and "better than chance" accept/reject recommendations. However, it leans toward acceptance and hesitates to recommend rejection. Guiding the model with directive and indirect prompts improves its accuracy on rejected papers. Removing reviewer ratings from the input significantly degrades accuracy, highlighting their importance. While AI holds promise for aiding peer review, challenges remain in refining its decision-making and addressing its biases.
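The setup described above can be sketched as follows. This is a minimal illustration, not the authors' actual pipeline: the function name, review dictionary format, and prompt wording are all assumptions, and the `include_ratings` flag mirrors the study's ablation in which reviewer ratings are withheld from the input.

```python
# Illustrative sketch of prompting an LLM for a meta-review.
# The dictionary format and prompt wording are assumptions,
# not the exact prompts used in the study.

def build_meta_review_prompt(reviews, include_ratings=True):
    """Assemble individual reviews (and optionally their numeric
    ratings) into one prompt asking the LLM for a meta-review."""
    parts = []
    for i, r in enumerate(reviews, 1):
        header = f"Review {i}"
        if include_ratings and "rating" in r:
            header += f" (rating: {r['rating']}/10)"
        parts.append(f"{header}:\n{r['text']}")
    instruction = (
        "You are the area chair. Based on the reviews above, write a "
        "meta-review with: (1) an Accept or Reject recommendation, "
        "(2) a confidence level, and (3) a brief explanation."
    )
    return "\n\n".join(parts) + "\n\n" + instruction


reviews = [
    {"text": "Strong results, clear writing.", "rating": 7},
    {"text": "Limited novelty; experiments are narrow.", "rating": 4},
]
prompt = build_meta_review_prompt(reviews)
# The prompt would then be sent to the model, e.g. via the OpenAI API:
# client.chat.completions.create(
#     model="gpt-4",
#     messages=[{"role": "user", "content": prompt}])
```

Setting `include_ratings=False` reproduces the ablation condition in which the model must judge the paper from review text alone, the setting reported to hurt accuracy the most.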