Each week, we post a new GRE Challenge Problem for you to attempt. If you submit the correct answer, you will be entered into that week’s drawing for two free Manhattan Prep GRE Strategy Guides.

Positive integer n leaves a remainder of 2 after division by 6 and a remainder of 4 after division by 5. What is the remainder when n is divided by 30?
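After you've worked the problem by hand, a quick brute-force search (a verification sketch, not the intended test-day approach) confirms that every qualifying value of n leaves the same remainder when divided by 30:

```python
# Find all n up to 300 that leave remainder 2 mod 6 and remainder 4 mod 5,
# then check the remainder each leaves when divided by 30.
candidates = [n for n in range(1, 301) if n % 6 == 2 and n % 5 == 4]
remainders = {n % 30 for n in candidates}
print(candidates[:4])   # first few qualifying values: [14, 44, 74, 104]
print(remainders)       # all leave the same remainder mod 30: {14}
```

The qualifying values march up in steps of 30 (the least common multiple of 6 and 5), which is exactly why the remainder mod 30 is the same for all of them.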

We hope everyone had a happy Halloween! Yesterday we asked our friends on our Manhattan GRE Facebook page to attempt this Trick-or-Treat Halloween Challenge Problem. As promised, today we are sharing the answer and explanation to the problem:

Let’s use x for the number of bags produced by the original recipe, and y for the weight of each bag. Given those variables, our first equation is simply xy = 600. We also need an equation that represents the new recipe. Since the number of bags produced has increased by 30, and the weight of each bag has decreased by 1, the new equation is (x + 30)(y – 1) = 600. Remember, the total weight is still 600 ounces. FOILing the left side yields xy – x + 30y – 30 = 600.

We now have two equations with two variables. There are several different paths we can go down here, but all involve substitution of one of the variables, and all will yield a quadratic. The simplest path is to recognize that since xy = 600, we can substitute for xy in the second equation to get 600 – x + 30y – 30 = 600. Subtracting the 600 from both sides, and adding an x to each side gives us 30y – 30 = x. We can now substitute for x in the first equation.
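Carrying that substitution through by hand, (30y – 30)y = 600 reduces to y² – y – 20 = 0, which factors as (y – 5)(y + 4) = 0; since a bag’s weight must be positive, y = 5 ounces and x = 30(5) – 30 = 120 bags. A short script (a verification sketch, not part of the original write-up) confirms that these values satisfy both equations:

```python
# Solve y**2 - y - 20 = 0 for the positive root, then recover x = 30y - 30.
import math

a, b, c = 1, -1, -20
y = (-b + math.sqrt(b**2 - 4*a*c)) / (2*a)  # quadratic formula, positive root
x = 30*y - 30

print(x, y)              # 120.0 5.0
print(x*y)               # original recipe: 120 bags * 5 oz = 600.0 oz
print((x + 30)*(y - 1))  # new recipe: 150 bags * 4 oz = 600.0 oz
```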

Welcome to part 4 of the article series on analyzing your GRE practice tests. As we discussed in the first, second, and third parts of this series, we’re basing the discussion on the metrics that are given in Manhattan Prep tests, but you can extrapolate to other tests that give you similar performance data. If you haven’t already read those, do so before you continue with this final part.

In the first part, we discussed how to assess the data provided in the “question list”—the list that shows the questions you received and how you performed on each one. In the second and third parts, we analyzed the data in our Assessment Reports.

Today, we’ll do a final bit of analysis that will help us study all of these weaknesses that we’ve been uncovering.

## Getting Started

You can do the following analysis on just the test you took most recently, but for this last step I generally recommend running the Assessment Reports on your last 2 or 3 tests, as long as they were all taken within the past 6 weeks or so.

Two notes before we begin:

(1) When I refer to “percent correct” below, everything is relative to your own performance. If you answer 60% correctly but other categories are at 50%, then this category falls into “I get these right.” If you answer 60% correctly but other categories are at 70%, then this category falls into “I get these wrong.”

(2) The “too fast” and “too slow” designations are based on the timing benchmarks I gave you in the first part of this article series.
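To make note (1) concrete, here is one way to formalize “relative to your own performance” (a hypothetical helper for illustration, not part of Manhattan Prep’s actual reports): compare each category’s accuracy to the average accuracy of all the other categories.

```python
def relative_standing(accuracies, category):
    """Label one category relative to the average accuracy
    of all the OTHER categories (self-relative, not absolute)."""
    others = [v for k, v in accuracies.items() if k != category]
    avg_others = sum(others) / len(others)
    if accuracies[category] >= avg_others:
        return "I get these right"
    return "I get these wrong"

# 60% correct while other categories sit at 50%:
print(relative_standing({"Quant": 0.60, "QC": 0.50, "DI": 0.50}, "Quant"))
# -> "I get these right"

# The same 60% while other categories sit at 70%:
print(relative_standing({"Quant": 0.60, "QC": 0.70, "DI": 0.70}, "Quant"))
# -> "I get these wrong"
```

The same 60% lands in different buckets depending on your other scores, which is the whole point of grading yourself against your own baseline.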

You generally want to place question types and topic areas into one of the following five groups.

## Group 1. I get these right roughly within the expected timeframe

Definition: Your percent correct is at the higher end of your range* and your average time is neither way too fast nor way too slow.

Going forward, these areas are not high on your priority list, but there may still be things you can learn: faster ways to do the problem; ways to make educated guesses (so that you can use the thought process on harder problems of the same type); how to quickly recognize future problems of the same type.

Make sure that you actually knew what you were doing for each problem and didn’t just get lucky! Finally, you may want to move on to more advanced material in these areas.

Welcome to part 3 of the article series on analyzing your GRE practice tests. As we discussed in the first and second parts of this series, we’re basing the discussion on the metrics that are given in Manhattan Prep tests, but you can extrapolate to other tests that give you similar performance data. If you haven’t already read those, do so before you continue with this third part.

In the first part, we discussed how to assess the data provided in the “question list”—the list that shows the questions you received and how you performed on each one. In the second part, we began analyzing the data in our Assessment Reports. We’re going to continue with that task today.

Last time, we covered the first of five Assessment Reports that you can generate in the testing system. Today, we’ll cover the final four reports.

## Quantitative by Question Format and Difficulty

The second report shows your quant performance by Question Format and Difficulty. You will already have some ideas about your performance from your initial analysis; now, you’re seeing whether this data confirms what you already suspect and whether you can pick up any additional nuance from this more detailed report.

In general, performance drops as questions get harder, so you would expect to have the highest accuracy on the Easier problems, dropping to your lowest accuracy on the Devilish problems. Check to see whether this trend holds or whether the data is surprising.

Our pretend student generally followed this expected trend for Quant and QC questions: as the questions got harder, her average performance dropped. (We should really give her a name. Let’s call her… how about Cathy?) On the other hand, Cathy missed a couple of easier DI questions even though she answered harder DI questions correctly. Hmm! Maybe she made some careless mistakes on something she did know how to do, or maybe the questions tested something that she didn’t know but that she could learn without too much trouble. She’ll need to dig into the individual questions to find out, but it certainly looks like there’s a good opportunity here for her to pick up some points.

Do you see the one data point that really jumps out here? Cathy spent 8.5 minutes on a single DI question! Except for that, her DI timing was right on target. She needs to make sure she’s got a mechanism in place to cut herself off so that she NEVER takes anywhere near that much time on a single question again!

Other than that, this data tells us that our earlier hypotheses are likely on target.

(Note: there is also a “medium-low” difficulty category, but Cathy happened not to get any math questions in that group.)

## Verbal by Question Format and Difficulty

Now do the same thing for verbal! (Note: some people prefer to do all of the quant analysis and then do all of the verbal analysis; feel free to do the analysis in whatever order makes the most sense to you.)

Welcome to part 2 of the process for analyzing your GRE practice tests. As we discussed in the first part of this series, we’re basing the discussion on the metrics that are given in Manhattan Prep tests, but you can extrapolate to other tests that give you similar performance data.

Last time, we discussed how to assess the data provided in the “question list”—the list that shows the questions you received and how you performed on each one. This week, we’re going to interpret the analysis given in the Assessment Reports.

When you log into your Manhattan Prep student center, you’ll be on the Exam Page. Click the link titled “Generate Assessment Reports.” Make sure all of the reports are checked and then choose your most recent (single) test. Finally, click “Generate.”

## Assessment Summary

The first report produced is the Assessment Summary; this report summarizes your performance across accuracy, timing, and difficulty level.

The top half of the report shows the six main question types on the GRE. Take a look at this fictional example (you may need to zoom in to see the details below):

First, examine the three quant types (the first three rows). It’s important to look at the three categories of data (accuracy, timing, difficulty) collectively.

On Quant questions, the student has a decent percent correct, but she’s rushing, and the average difficulty level of her correct answers is the lowest of the three categories. She might be able to improve her score by slowing down a little on these kinds of questions (and making fewer careless mistakes).

The student seems to be struggling a bit more on Quantitative Comparison (QC). This type has her lowest percent correct, but she’s spending about the right amount of time (so the lower percent correct is not due to rushing). The difficulty levels are a little higher, so it’s logical that the percent correct would be a little lower, but even accounting for difficulty she’s still struggling a bit more with QC.

It’s a bit surprising that incorrect QCs are slightly faster than correct QCs, given that the average difficulty level of incorrect QCs is so much higher than the average difficulty level of correct QCs. In general, harder questions should take longer. Either this student did a great job of recognizing that a question was too hard and appropriately cut herself off, or she rushed a bit too much and possibly cost herself some points. She would have to look at the individual questions to figure out why.