If you are on a perfectly clean channel, with a signal strength (RSSI) between -20 dBm and -60 dBm, a well-optimized TCP/IP stack and application, and high-quality 802.11g chipsets at both ends, you should be able to see as much as 25 megabits/sec (30 if both ends support frame bursting). The rest of the 54 megabit/sec PHY rate is eaten by 802.11 MAC overhead: preambles, interframe spacing, and link-layer ACKs, which is exactly the overhead frame bursting reduces.
Note that 1 meter away may be too close. At that range it's possible to see a signal strength above -20 dBm, which can be "too hot" and overload the receiver. High-quality chipsets might handle signals as hot as 0 dBm and still receive at maximum data rates, but I've seen plenty of lesser-quality chipsets lose their top data rates at -20 dBm. 2-3 m away is a better choice for top data rates.
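If it helps to see those thresholds in one place, here's a minimal sketch that buckets an RSSI reading using the ranges above. The cutoffs are just the rules of thumb from this answer, not chipset specs, and as noted, real receivers vary:

```python
# Rough classifier for the RSSI ranges discussed above.
# Thresholds are this answer's rules of thumb (-20 dBm "too hot"
# ceiling, -60 dBm floor for top rates), not hardware specs.

def classify_rssi(rssi_dbm: float) -> str:
    """Bucket a received signal strength reading (in dBm)."""
    if rssi_dbm > -20:
        return "too hot: may overload lesser receivers, move farther away"
    if rssi_dbm >= -60:
        return "sweet spot: top data rates plausible on a clean channel"
    return "weak: expect reduced data rates"

for reading in (-10, -35, -72):
    print(f"{reading} dBm -> {classify_rssi(reading)}")
```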
Here in almost-2012, finding high-quality G gear is pretty hard, because 802.11g is from almost a decade ago. Anyone still making G-only chipsets now, or in the last 3-4 years, was likely doing it to be as cheap, small, and low-power as possible (for the smartphone/tablet/netbook markets, among others), which is kind of the opposite of high quality.
The companies making high-quality 802.11 chipsets in late 2011 and early 2012 are making 3x3:3, HT40 (450 megabits/sec) 802.11n gear. Even then, they spend most of their time making sure their N rates are optimal, and far less optimizing backward compatibility with a/b/g.
Having a well-optimized TCP stack and an app that always keeps the TCP pipe full is good too. I recommend IPerf as a simple performance tool that knows how to use TCP effectively. If you get much better performance with IPerf than you did with the app you were running, the app is probably non-optimal. See what TCP window IPerf reports your machines are using, and make sure it meets or exceeds the bandwidth × delay product for your network (you likely need something like 20 KiB or larger).
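To make that window-size rule of thumb concrete, here's a quick back-of-the-envelope sketch. The 25 Mbit/s figure is the best case from above; the 7 ms round-trip time is my assumption for a typical WLAN, so plug in whatever ping reports on your network:

```python
# Back-of-the-envelope bandwidth * delay product check.
# Assumptions (mine, not measurements): ~25 Mbit/s of goodput on a
# clean 802.11g link, and a ~7 ms WLAN round-trip time.

def bdp_bytes(bandwidth_bits_per_sec: float, rtt_sec: float) -> float:
    """Bytes that must be in flight to keep the TCP pipe full."""
    return bandwidth_bits_per_sec * rtt_sec / 8

bandwidth = 25e6  # 25 Mbit/s, the best-case figure above
rtt = 0.007       # 7 ms RTT -- an assumed, plausible WLAN round trip

bdp = bdp_bytes(bandwidth, rtt)
print(f"BDP: {bdp:.0f} bytes (~{bdp / 1024:.1f} KiB)")
# Prints ~21875 bytes (~21.4 KiB): the TCP window needs to be at least
# that, which is where the "20 KiB or larger" figure comes from.
```

If the window IPerf reports is smaller than that, try forcing a bigger one with IPerf's `-w` option (for example, `iperf -c <server> -w 64K`) and see whether throughput improves.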